Nov 25 17:53:09 np0005535838 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 17:53:09 np0005535838 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 17:53:09 np0005535838 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 17:53:09 np0005535838 kernel: BIOS-provided physical RAM map:
Nov 25 17:53:09 np0005535838 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 17:53:09 np0005535838 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 17:53:09 np0005535838 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 17:53:09 np0005535838 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 25 17:53:09 np0005535838 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 25 17:53:09 np0005535838 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 17:53:09 np0005535838 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 17:53:09 np0005535838 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 25 17:53:09 np0005535838 kernel: NX (Execute Disable) protection: active
Nov 25 17:53:09 np0005535838 kernel: APIC: Static calls initialized
Nov 25 17:53:09 np0005535838 kernel: SMBIOS 2.8 present.
Nov 25 17:53:09 np0005535838 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 25 17:53:09 np0005535838 kernel: Hypervisor detected: KVM
Nov 25 17:53:09 np0005535838 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 17:53:09 np0005535838 kernel: kvm-clock: using sched offset of 4746104976 cycles
Nov 25 17:53:09 np0005535838 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 17:53:09 np0005535838 kernel: tsc: Detected 2799.998 MHz processor
Nov 25 17:53:09 np0005535838 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 25 17:53:09 np0005535838 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 17:53:09 np0005535838 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 17:53:09 np0005535838 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 25 17:53:09 np0005535838 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 25 17:53:09 np0005535838 kernel: Using GB pages for direct mapping
Nov 25 17:53:09 np0005535838 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 25 17:53:09 np0005535838 kernel: ACPI: Early table checksum verification disabled
Nov 25 17:53:09 np0005535838 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 25 17:53:09 np0005535838 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 17:53:09 np0005535838 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 17:53:09 np0005535838 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 17:53:09 np0005535838 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 25 17:53:09 np0005535838 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 17:53:09 np0005535838 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 17:53:09 np0005535838 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 25 17:53:09 np0005535838 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 25 17:53:09 np0005535838 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 25 17:53:09 np0005535838 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 25 17:53:09 np0005535838 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 25 17:53:09 np0005535838 kernel: No NUMA configuration found
Nov 25 17:53:09 np0005535838 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 25 17:53:09 np0005535838 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 25 17:53:09 np0005535838 kernel: crashkernel reserved: 0x00000000a9000000 - 0x00000000b9000000 (256 MB)
Nov 25 17:53:09 np0005535838 kernel: Zone ranges:
Nov 25 17:53:09 np0005535838 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 17:53:09 np0005535838 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 17:53:09 np0005535838 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 17:53:09 np0005535838 kernel:  Device   empty
Nov 25 17:53:09 np0005535838 kernel: Movable zone start for each node
Nov 25 17:53:09 np0005535838 kernel: Early memory node ranges
Nov 25 17:53:09 np0005535838 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 17:53:09 np0005535838 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 25 17:53:09 np0005535838 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 17:53:09 np0005535838 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 25 17:53:09 np0005535838 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 17:53:09 np0005535838 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 17:53:09 np0005535838 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 17:53:09 np0005535838 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 17:53:09 np0005535838 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 17:53:09 np0005535838 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 17:53:09 np0005535838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 17:53:09 np0005535838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 17:53:09 np0005535838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 17:53:09 np0005535838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 17:53:09 np0005535838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 17:53:09 np0005535838 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 17:53:09 np0005535838 kernel: TSC deadline timer available
Nov 25 17:53:09 np0005535838 kernel: CPU topo: Max. logical packages:   8
Nov 25 17:53:09 np0005535838 kernel: CPU topo: Max. logical dies:       8
Nov 25 17:53:09 np0005535838 kernel: CPU topo: Max. dies per package:   1
Nov 25 17:53:09 np0005535838 kernel: CPU topo: Max. threads per core:   1
Nov 25 17:53:09 np0005535838 kernel: CPU topo: Num. cores per package:     1
Nov 25 17:53:09 np0005535838 kernel: CPU topo: Num. threads per package:   1
Nov 25 17:53:09 np0005535838 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 25 17:53:09 np0005535838 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 17:53:09 np0005535838 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 17:53:09 np0005535838 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 17:53:09 np0005535838 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 17:53:09 np0005535838 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 17:53:09 np0005535838 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 25 17:53:09 np0005535838 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 25 17:53:09 np0005535838 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 17:53:09 np0005535838 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 17:53:09 np0005535838 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 17:53:09 np0005535838 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 25 17:53:09 np0005535838 kernel: Booting paravirtualized kernel on KVM
Nov 25 17:53:09 np0005535838 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 17:53:09 np0005535838 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 25 17:53:09 np0005535838 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 25 17:53:09 np0005535838 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 25 17:53:09 np0005535838 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 17:53:09 np0005535838 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 17:53:09 np0005535838 kernel: random: crng init done
Nov 25 17:53:09 np0005535838 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: Fallback order for Node 0: 0 
Nov 25 17:53:09 np0005535838 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 17:53:09 np0005535838 kernel: Policy zone: Normal
Nov 25 17:53:09 np0005535838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 17:53:09 np0005535838 kernel: software IO TLB: area num 8.
Nov 25 17:53:09 np0005535838 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 25 17:53:09 np0005535838 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 17:53:09 np0005535838 kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 17:53:09 np0005535838 kernel: Dynamic Preempt: voluntary
Nov 25 17:53:09 np0005535838 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 17:53:09 np0005535838 kernel: rcu: 	RCU event tracing is enabled.
Nov 25 17:53:09 np0005535838 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 25 17:53:09 np0005535838 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 25 17:53:09 np0005535838 kernel: 	Rude variant of Tasks RCU enabled.
Nov 25 17:53:09 np0005535838 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 25 17:53:09 np0005535838 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 17:53:09 np0005535838 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 25 17:53:09 np0005535838 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 17:53:09 np0005535838 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 17:53:09 np0005535838 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 17:53:09 np0005535838 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 25 17:53:09 np0005535838 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 17:53:09 np0005535838 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 17:53:09 np0005535838 kernel: Console: colour VGA+ 80x25
Nov 25 17:53:09 np0005535838 kernel: printk: console [ttyS0] enabled
Nov 25 17:53:09 np0005535838 kernel: ACPI: Core revision 20230331
Nov 25 17:53:09 np0005535838 kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 17:53:09 np0005535838 kernel: x2apic enabled
Nov 25 17:53:09 np0005535838 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 17:53:09 np0005535838 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 17:53:09 np0005535838 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 25 17:53:09 np0005535838 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 17:53:09 np0005535838 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 17:53:09 np0005535838 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 17:53:09 np0005535838 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 17:53:09 np0005535838 kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 17:53:09 np0005535838 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 17:53:09 np0005535838 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 25 17:53:09 np0005535838 kernel: RETBleed: Mitigation: untrained return thunk
Nov 25 17:53:09 np0005535838 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 17:53:09 np0005535838 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 17:53:09 np0005535838 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 17:53:09 np0005535838 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 17:53:09 np0005535838 kernel: x86/bugs: return thunk changed
Nov 25 17:53:09 np0005535838 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 17:53:09 np0005535838 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 17:53:09 np0005535838 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 17:53:09 np0005535838 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 17:53:09 np0005535838 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 17:53:09 np0005535838 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 25 17:53:09 np0005535838 kernel: Freeing SMP alternatives memory: 40K
Nov 25 17:53:09 np0005535838 kernel: pid_max: default: 32768 minimum: 301
Nov 25 17:53:09 np0005535838 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 17:53:09 np0005535838 kernel: landlock: Up and running.
Nov 25 17:53:09 np0005535838 kernel: Yama: becoming mindful.
Nov 25 17:53:09 np0005535838 kernel: SELinux:  Initializing.
Nov 25 17:53:09 np0005535838 kernel: LSM support for eBPF active
Nov 25 17:53:09 np0005535838 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 25 17:53:09 np0005535838 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 17:53:09 np0005535838 kernel: ... version:                0
Nov 25 17:53:09 np0005535838 kernel: ... bit width:              48
Nov 25 17:53:09 np0005535838 kernel: ... generic registers:      6
Nov 25 17:53:09 np0005535838 kernel: ... value mask:             0000ffffffffffff
Nov 25 17:53:09 np0005535838 kernel: ... max period:             00007fffffffffff
Nov 25 17:53:09 np0005535838 kernel: ... fixed-purpose events:   0
Nov 25 17:53:09 np0005535838 kernel: ... event mask:             000000000000003f
Nov 25 17:53:09 np0005535838 kernel: signal: max sigframe size: 1776
Nov 25 17:53:09 np0005535838 kernel: rcu: Hierarchical SRCU implementation.
Nov 25 17:53:09 np0005535838 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 25 17:53:09 np0005535838 kernel: smp: Bringing up secondary CPUs ...
Nov 25 17:53:09 np0005535838 kernel: smpboot: x86: Booting SMP configuration:
Nov 25 17:53:09 np0005535838 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 25 17:53:09 np0005535838 kernel: smp: Brought up 1 node, 8 CPUs
Nov 25 17:53:09 np0005535838 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 25 17:53:09 np0005535838 kernel: node 0 deferred pages initialised in 10ms
Nov 25 17:53:09 np0005535838 kernel: Memory: 7765840K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616272K reserved, 0K cma-reserved)
Nov 25 17:53:09 np0005535838 kernel: devtmpfs: initialized
Nov 25 17:53:09 np0005535838 kernel: x86/mm: Memory block size: 128MB
Nov 25 17:53:09 np0005535838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 17:53:09 np0005535838 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 17:53:09 np0005535838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 17:53:09 np0005535838 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 17:53:09 np0005535838 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 17:53:09 np0005535838 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 17:53:09 np0005535838 kernel: audit: initializing netlink subsys (disabled)
Nov 25 17:53:09 np0005535838 kernel: audit: type=2000 audit(1764111187.269:1): state=initialized audit_enabled=0 res=1
Nov 25 17:53:09 np0005535838 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 17:53:09 np0005535838 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 17:53:09 np0005535838 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 17:53:09 np0005535838 kernel: cpuidle: using governor menu
Nov 25 17:53:09 np0005535838 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 17:53:09 np0005535838 kernel: PCI: Using configuration type 1 for base access
Nov 25 17:53:09 np0005535838 kernel: PCI: Using configuration type 1 for extended access
Nov 25 17:53:09 np0005535838 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 17:53:09 np0005535838 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 17:53:09 np0005535838 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 17:53:09 np0005535838 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 17:53:09 np0005535838 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 17:53:09 np0005535838 kernel: Demotion targets for Node 0: null
Nov 25 17:53:09 np0005535838 kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 17:53:09 np0005535838 kernel: ACPI: Added _OSI(Module Device)
Nov 25 17:53:09 np0005535838 kernel: ACPI: Added _OSI(Processor Device)
Nov 25 17:53:09 np0005535838 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 17:53:09 np0005535838 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 17:53:09 np0005535838 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 17:53:09 np0005535838 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 17:53:09 np0005535838 kernel: ACPI: Interpreter enabled
Nov 25 17:53:09 np0005535838 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 25 17:53:09 np0005535838 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 17:53:09 np0005535838 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 17:53:09 np0005535838 kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 17:53:09 np0005535838 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 25 17:53:09 np0005535838 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 17:53:09 np0005535838 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [3] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [4] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [5] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [6] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [7] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [8] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [9] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [10] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [11] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [12] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [13] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [14] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [15] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [16] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [17] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [18] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [19] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [20] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [21] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [22] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [23] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [24] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [25] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [26] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [27] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [28] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [29] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [30] registered
Nov 25 17:53:09 np0005535838 kernel: acpiphp: Slot [31] registered
Nov 25 17:53:09 np0005535838 kernel: PCI host bridge to bus 0000:00
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 25 17:53:09 np0005535838 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 17:53:09 np0005535838 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 17:53:09 np0005535838 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 17:53:09 np0005535838 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 17:53:09 np0005535838 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 25 17:53:09 np0005535838 kernel: iommu: Default domain type: Translated
Nov 25 17:53:09 np0005535838 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 17:53:09 np0005535838 kernel: SCSI subsystem initialized
Nov 25 17:53:09 np0005535838 kernel: ACPI: bus type USB registered
Nov 25 17:53:09 np0005535838 kernel: usbcore: registered new interface driver usbfs
Nov 25 17:53:09 np0005535838 kernel: usbcore: registered new interface driver hub
Nov 25 17:53:09 np0005535838 kernel: usbcore: registered new device driver usb
Nov 25 17:53:09 np0005535838 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 17:53:09 np0005535838 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 17:53:09 np0005535838 kernel: PTP clock support registered
Nov 25 17:53:09 np0005535838 kernel: EDAC MC: Ver: 3.0.0
Nov 25 17:53:09 np0005535838 kernel: NetLabel: Initializing
Nov 25 17:53:09 np0005535838 kernel: NetLabel:  domain hash size = 128
Nov 25 17:53:09 np0005535838 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 17:53:09 np0005535838 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 17:53:09 np0005535838 kernel: PCI: Using ACPI for IRQ routing
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 17:53:09 np0005535838 kernel: vgaarb: loaded
Nov 25 17:53:09 np0005535838 kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 17:53:09 np0005535838 kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 17:53:09 np0005535838 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 17:53:09 np0005535838 kernel: pnp: PnP ACPI init
Nov 25 17:53:09 np0005535838 kernel: pnp: PnP ACPI: found 5 devices
Nov 25 17:53:09 np0005535838 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 17:53:09 np0005535838 kernel: NET: Registered PF_INET protocol family
Nov 25 17:53:09 np0005535838 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 17:53:09 np0005535838 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 17:53:09 np0005535838 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 17:53:09 np0005535838 kernel: NET: Registered PF_XDP protocol family
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 25 17:53:09 np0005535838 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 25 17:53:09 np0005535838 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 25 17:53:09 np0005535838 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 80882 usecs
Nov 25 17:53:09 np0005535838 kernel: PCI: CLS 0 bytes, default 64
Nov 25 17:53:09 np0005535838 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 17:53:09 np0005535838 kernel: software IO TLB: mapped [mem 0x00000000a5000000-0x00000000a9000000] (64MB)
Nov 25 17:53:09 np0005535838 kernel: ACPI: bus type thunderbolt registered
Nov 25 17:53:09 np0005535838 kernel: Trying to unpack rootfs image as initramfs...
Nov 25 17:53:09 np0005535838 kernel: Initialise system trusted keyrings
Nov 25 17:53:09 np0005535838 kernel: Key type blacklist registered
Nov 25 17:53:09 np0005535838 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 17:53:09 np0005535838 kernel: zbud: loaded
Nov 25 17:53:09 np0005535838 kernel: integrity: Platform Keyring initialized
Nov 25 17:53:09 np0005535838 kernel: integrity: Machine keyring initialized
Nov 25 17:53:09 np0005535838 kernel: Freeing initrd memory: 85868K
Nov 25 17:53:09 np0005535838 kernel: NET: Registered PF_ALG protocol family
Nov 25 17:53:09 np0005535838 kernel: xor: automatically using best checksumming function   avx       
Nov 25 17:53:09 np0005535838 kernel: Key type asymmetric registered
Nov 25 17:53:09 np0005535838 kernel: Asymmetric key parser 'x509' registered
Nov 25 17:53:09 np0005535838 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 17:53:09 np0005535838 kernel: io scheduler mq-deadline registered
Nov 25 17:53:09 np0005535838 kernel: io scheduler kyber registered
Nov 25 17:53:09 np0005535838 kernel: io scheduler bfq registered
Nov 25 17:53:09 np0005535838 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 17:53:09 np0005535838 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 17:53:09 np0005535838 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 17:53:09 np0005535838 kernel: ACPI: button: Power Button [PWRF]
Nov 25 17:53:09 np0005535838 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 25 17:53:09 np0005535838 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 25 17:53:09 np0005535838 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 25 17:53:09 np0005535838 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 17:53:09 np0005535838 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 17:53:09 np0005535838 kernel: Non-volatile memory driver v1.3
Nov 25 17:53:09 np0005535838 kernel: rdac: device handler registered
Nov 25 17:53:09 np0005535838 kernel: hp_sw: device handler registered
Nov 25 17:53:09 np0005535838 kernel: emc: device handler registered
Nov 25 17:53:09 np0005535838 kernel: alua: device handler registered
Nov 25 17:53:09 np0005535838 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 25 17:53:09 np0005535838 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 25 17:53:09 np0005535838 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 25 17:53:09 np0005535838 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 25 17:53:09 np0005535838 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 17:53:09 np0005535838 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 17:53:09 np0005535838 kernel: usb usb1: Product: UHCI Host Controller
Nov 25 17:53:09 np0005535838 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 17:53:09 np0005535838 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 25 17:53:09 np0005535838 kernel: hub 1-0:1.0: USB hub found
Nov 25 17:53:09 np0005535838 kernel: hub 1-0:1.0: 2 ports detected
Nov 25 17:53:09 np0005535838 kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 17:53:09 np0005535838 kernel: usbserial: USB Serial support registered for generic
Nov 25 17:53:09 np0005535838 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 17:53:09 np0005535838 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 17:53:09 np0005535838 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 17:53:09 np0005535838 kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 17:53:09 np0005535838 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 25 17:53:09 np0005535838 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 17:53:09 np0005535838 kernel: rtc_cmos 00:04: registered as rtc0
Nov 25 17:53:09 np0005535838 kernel: rtc_cmos 00:04: setting system clock to 2025-11-25T22:53:08 UTC (1764111188)
Nov 25 17:53:09 np0005535838 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 25 17:53:09 np0005535838 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 17:53:09 np0005535838 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 17:53:09 np0005535838 kernel: usbcore: registered new interface driver usbhid
Nov 25 17:53:09 np0005535838 kernel: usbhid: USB HID core driver
Nov 25 17:53:09 np0005535838 kernel: drop_monitor: Initializing network drop monitor service
Nov 25 17:53:09 np0005535838 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 17:53:09 np0005535838 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 17:53:09 np0005535838 kernel: Initializing XFRM netlink socket
Nov 25 17:53:09 np0005535838 kernel: NET: Registered PF_INET6 protocol family
Nov 25 17:53:09 np0005535838 kernel: Segment Routing with IPv6
Nov 25 17:53:09 np0005535838 kernel: NET: Registered PF_PACKET protocol family
Nov 25 17:53:09 np0005535838 kernel: mpls_gso: MPLS GSO support
Nov 25 17:53:09 np0005535838 kernel: IPI shorthand broadcast: enabled
Nov 25 17:53:09 np0005535838 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 17:53:09 np0005535838 kernel: AES CTR mode by8 optimization enabled
Nov 25 17:53:09 np0005535838 kernel: sched_clock: Marking stable (1258007193, 153474960)->(1545379434, -133897281)
Nov 25 17:53:09 np0005535838 kernel: registered taskstats version 1
Nov 25 17:53:09 np0005535838 kernel: Loading compiled-in X.509 certificates
Nov 25 17:53:09 np0005535838 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 17:53:09 np0005535838 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 17:53:09 np0005535838 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 17:53:09 np0005535838 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 17:53:09 np0005535838 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 17:53:09 np0005535838 kernel: Demotion targets for Node 0: null
Nov 25 17:53:09 np0005535838 kernel: page_owner is disabled
Nov 25 17:53:09 np0005535838 kernel: Key type .fscrypt registered
Nov 25 17:53:09 np0005535838 kernel: Key type fscrypt-provisioning registered
Nov 25 17:53:09 np0005535838 kernel: Key type big_key registered
Nov 25 17:53:09 np0005535838 kernel: Key type encrypted registered
Nov 25 17:53:09 np0005535838 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 17:53:09 np0005535838 kernel: Loading compiled-in module X.509 certificates
Nov 25 17:53:09 np0005535838 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 17:53:09 np0005535838 kernel: ima: Allocated hash algorithm: sha256
Nov 25 17:53:09 np0005535838 kernel: ima: No architecture policies found
Nov 25 17:53:09 np0005535838 kernel: evm: Initialising EVM extended attributes:
Nov 25 17:53:09 np0005535838 kernel: evm: security.selinux
Nov 25 17:53:09 np0005535838 kernel: evm: security.SMACK64 (disabled)
Nov 25 17:53:09 np0005535838 kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 17:53:09 np0005535838 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 17:53:09 np0005535838 kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 17:53:09 np0005535838 kernel: evm: security.apparmor (disabled)
Nov 25 17:53:09 np0005535838 kernel: evm: security.ima
Nov 25 17:53:09 np0005535838 kernel: evm: security.capability
Nov 25 17:53:09 np0005535838 kernel: evm: HMAC attrs: 0x1
Nov 25 17:53:09 np0005535838 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 17:53:09 np0005535838 kernel: Running certificate verification RSA selftest
Nov 25 17:53:09 np0005535838 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 17:53:09 np0005535838 kernel: Running certificate verification ECDSA selftest
Nov 25 17:53:09 np0005535838 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 17:53:09 np0005535838 kernel: clk: Disabling unused clocks
Nov 25 17:53:09 np0005535838 kernel: Freeing unused decrypted memory: 2028K
Nov 25 17:53:09 np0005535838 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 17:53:09 np0005535838 kernel: Write protecting the kernel read-only data: 30720k
Nov 25 17:53:09 np0005535838 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 17:53:09 np0005535838 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 17:53:09 np0005535838 kernel: Run /init as init process
Nov 25 17:53:09 np0005535838 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 17:53:09 np0005535838 systemd: Detected virtualization kvm.
Nov 25 17:53:09 np0005535838 systemd: Detected architecture x86-64.
Nov 25 17:53:09 np0005535838 systemd: Running in initrd.
Nov 25 17:53:09 np0005535838 systemd: No hostname configured, using default hostname.
Nov 25 17:53:09 np0005535838 systemd: Hostname set to <localhost>.
Nov 25 17:53:09 np0005535838 systemd: Initializing machine ID from VM UUID.
Nov 25 17:53:09 np0005535838 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 17:53:09 np0005535838 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 17:53:09 np0005535838 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 17:53:09 np0005535838 kernel: usb 1-1: Manufacturer: QEMU
Nov 25 17:53:09 np0005535838 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 25 17:53:09 np0005535838 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 17:53:09 np0005535838 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 25 17:53:09 np0005535838 systemd: Queued start job for default target Initrd Default Target.
Nov 25 17:53:09 np0005535838 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 17:53:09 np0005535838 systemd: Reached target Local Encrypted Volumes.
Nov 25 17:53:09 np0005535838 systemd: Reached target Initrd /usr File System.
Nov 25 17:53:09 np0005535838 systemd: Reached target Local File Systems.
Nov 25 17:53:09 np0005535838 systemd: Reached target Path Units.
Nov 25 17:53:09 np0005535838 systemd: Reached target Slice Units.
Nov 25 17:53:09 np0005535838 systemd: Reached target Swaps.
Nov 25 17:53:09 np0005535838 systemd: Reached target Timer Units.
Nov 25 17:53:09 np0005535838 systemd: Listening on D-Bus System Message Bus Socket.
Nov 25 17:53:09 np0005535838 systemd: Listening on Journal Socket (/dev/log).
Nov 25 17:53:09 np0005535838 systemd: Listening on Journal Socket.
Nov 25 17:53:09 np0005535838 systemd: Listening on udev Control Socket.
Nov 25 17:53:09 np0005535838 systemd: Listening on udev Kernel Socket.
Nov 25 17:53:09 np0005535838 systemd: Reached target Socket Units.
Nov 25 17:53:09 np0005535838 systemd: Starting Create List of Static Device Nodes...
Nov 25 17:53:09 np0005535838 systemd: Starting Journal Service...
Nov 25 17:53:09 np0005535838 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 17:53:09 np0005535838 systemd: Starting Apply Kernel Variables...
Nov 25 17:53:09 np0005535838 systemd: Starting Create System Users...
Nov 25 17:53:09 np0005535838 systemd: Starting Setup Virtual Console...
Nov 25 17:53:09 np0005535838 systemd: Finished Create List of Static Device Nodes.
Nov 25 17:53:09 np0005535838 systemd: Finished Apply Kernel Variables.
Nov 25 17:53:09 np0005535838 systemd: Finished Create System Users.
Nov 25 17:53:09 np0005535838 systemd-journald[303]: Journal started
Nov 25 17:53:09 np0005535838 systemd-journald[303]: Runtime Journal (/run/log/journal/99edd01fcb884b88a56d15f374f9d1d0) is 8.0M, max 153.6M, 145.6M free.
Nov 25 17:53:09 np0005535838 systemd-sysusers[307]: Creating group 'users' with GID 100.
Nov 25 17:53:09 np0005535838 systemd-sysusers[307]: Creating group 'dbus' with GID 81.
Nov 25 17:53:09 np0005535838 systemd-sysusers[307]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 17:53:09 np0005535838 systemd: Started Journal Service.
Nov 25 17:53:09 np0005535838 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 17:53:09 np0005535838 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 17:53:09 np0005535838 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 17:53:09 np0005535838 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 17:53:09 np0005535838 systemd[1]: Finished Setup Virtual Console.
Nov 25 17:53:09 np0005535838 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 17:53:09 np0005535838 systemd[1]: Starting dracut cmdline hook...
Nov 25 17:53:09 np0005535838 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 17:53:09 np0005535838 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 17:53:09 np0005535838 systemd[1]: Finished dracut cmdline hook.
Nov 25 17:53:09 np0005535838 systemd[1]: Starting dracut pre-udev hook...
Nov 25 17:53:09 np0005535838 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 17:53:09 np0005535838 kernel: device-mapper: uevent: version 1.0.3
Nov 25 17:53:09 np0005535838 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 17:53:09 np0005535838 kernel: RPC: Registered named UNIX socket transport module.
Nov 25 17:53:09 np0005535838 kernel: RPC: Registered udp transport module.
Nov 25 17:53:09 np0005535838 kernel: RPC: Registered tcp transport module.
Nov 25 17:53:09 np0005535838 kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 17:53:09 np0005535838 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 17:53:09 np0005535838 rpc.statd[442]: Version 2.5.4 starting
Nov 25 17:53:09 np0005535838 rpc.statd[442]: Initializing NSM state
Nov 25 17:53:09 np0005535838 rpc.idmapd[447]: Setting log level to 0
Nov 25 17:53:09 np0005535838 systemd[1]: Finished dracut pre-udev hook.
Nov 25 17:53:09 np0005535838 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 17:53:09 np0005535838 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 17:53:09 np0005535838 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 17:53:09 np0005535838 systemd[1]: Starting dracut pre-trigger hook...
Nov 25 17:53:09 np0005535838 systemd[1]: Finished dracut pre-trigger hook.
Nov 25 17:53:09 np0005535838 systemd[1]: Starting Coldplug All udev Devices...
Nov 25 17:53:09 np0005535838 systemd[1]: Created slice Slice /system/modprobe.
Nov 25 17:53:09 np0005535838 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 17:53:09 np0005535838 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 17:53:09 np0005535838 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 17:53:09 np0005535838 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 17:53:09 np0005535838 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 17:53:09 np0005535838 systemd[1]: Reached target Network.
Nov 25 17:53:09 np0005535838 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 17:53:09 np0005535838 systemd[1]: Starting dracut initqueue hook...
Nov 25 17:53:09 np0005535838 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 25 17:53:09 np0005535838 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 17:53:10 np0005535838 kernel: vda: vda1
Nov 25 17:53:10 np0005535838 kernel: scsi host0: ata_piix
Nov 25 17:53:10 np0005535838 kernel: scsi host1: ata_piix
Nov 25 17:53:10 np0005535838 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 25 17:53:10 np0005535838 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 25 17:53:10 np0005535838 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 25 17:53:10 np0005535838 systemd[1]: Reached target Initrd Root Device.
Nov 25 17:53:10 np0005535838 systemd[1]: Mounting Kernel Configuration File System...
Nov 25 17:53:10 np0005535838 systemd[1]: Mounted Kernel Configuration File System.
Nov 25 17:53:10 np0005535838 kernel: ata1: found unknown device (class 0)
Nov 25 17:53:10 np0005535838 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 17:53:10 np0005535838 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 17:53:10 np0005535838 systemd[1]: Reached target System Initialization.
Nov 25 17:53:10 np0005535838 systemd[1]: Reached target Basic System.
Nov 25 17:53:10 np0005535838 systemd-udevd[490]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:53:10 np0005535838 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 17:53:10 np0005535838 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 17:53:10 np0005535838 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 17:53:10 np0005535838 systemd[1]: Finished dracut initqueue hook.
Nov 25 17:53:10 np0005535838 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 17:53:10 np0005535838 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 17:53:10 np0005535838 systemd[1]: Reached target Remote File Systems.
Nov 25 17:53:10 np0005535838 systemd[1]: Starting dracut pre-mount hook...
Nov 25 17:53:10 np0005535838 systemd[1]: Finished dracut pre-mount hook.
Nov 25 17:53:10 np0005535838 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 25 17:53:10 np0005535838 systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 17:53:10 np0005535838 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 25 17:53:10 np0005535838 systemd[1]: Mounting /sysroot...
Nov 25 17:53:10 np0005535838 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 17:53:10 np0005535838 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 25 17:53:10 np0005535838 kernel: XFS (vda1): Ending clean mount
Nov 25 17:53:10 np0005535838 systemd[1]: Mounted /sysroot.
Nov 25 17:53:10 np0005535838 systemd[1]: Reached target Initrd Root File System.
Nov 25 17:53:10 np0005535838 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 17:53:11 np0005535838 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 17:53:11 np0005535838 systemd[1]: Reached target Initrd File Systems.
Nov 25 17:53:11 np0005535838 systemd[1]: Reached target Initrd Default Target.
Nov 25 17:53:11 np0005535838 systemd[1]: Starting dracut mount hook...
Nov 25 17:53:11 np0005535838 systemd[1]: Finished dracut mount hook.
Nov 25 17:53:11 np0005535838 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 17:53:11 np0005535838 rpc.idmapd[447]: exiting on signal 15
Nov 25 17:53:11 np0005535838 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 17:53:11 np0005535838 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Network.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Timer Units.
Nov 25 17:53:11 np0005535838 systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 17:53:11 np0005535838 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Initrd Default Target.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Basic System.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Initrd Root Device.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Initrd /usr File System.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Path Units.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Remote File Systems.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Slice Units.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Socket Units.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target System Initialization.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Local File Systems.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Swaps.
Nov 25 17:53:11 np0005535838 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped dracut mount hook.
Nov 25 17:53:11 np0005535838 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped dracut pre-mount hook.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 17:53:11 np0005535838 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped dracut initqueue hook.
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 17:53:11 np0005535838 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped Setup Virtual Console.
Nov 25 17:53:11 np0005535838 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Closed udev Control Socket.
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Closed udev Kernel Socket.
Nov 25 17:53:11 np0005535838 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped dracut pre-udev hook.
Nov 25 17:53:11 np0005535838 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped dracut cmdline hook.
Nov 25 17:53:11 np0005535838 systemd[1]: Starting Cleanup udev Database...
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 17:53:11 np0005535838 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 17:53:11 np0005535838 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Stopped Create System Users.
Nov 25 17:53:11 np0005535838 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 17:53:11 np0005535838 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 17:53:11 np0005535838 systemd[1]: Finished Cleanup udev Database.
Nov 25 17:53:11 np0005535838 systemd[1]: Reached target Switch Root.
Nov 25 17:53:11 np0005535838 systemd[1]: Starting Switch Root...
Nov 25 17:53:11 np0005535838 systemd[1]: Switching root.
Nov 25 17:53:11 np0005535838 systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Nov 25 17:53:11 np0005535838 systemd-journald[303]: Journal stopped
Nov 25 17:53:12 np0005535838 kernel: audit: type=1404 audit(1764111191.605:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 17:53:12 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 17:53:12 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 17:53:12 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 17:53:12 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 17:53:12 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 17:53:12 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 17:53:12 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 17:53:12 np0005535838 kernel: audit: type=1403 audit(1764111191.753:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 25 17:53:12 np0005535838 systemd: Successfully loaded SELinux policy in 153.539ms.
Nov 25 17:53:12 np0005535838 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.864ms.
Nov 25 17:53:12 np0005535838 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 17:53:12 np0005535838 systemd: Detected virtualization kvm.
Nov 25 17:53:12 np0005535838 systemd: Detected architecture x86-64.
Nov 25 17:53:12 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 17:53:12 np0005535838 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 25 17:53:12 np0005535838 systemd: Stopped Switch Root.
Nov 25 17:53:12 np0005535838 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 17:53:12 np0005535838 systemd: Created slice Slice /system/getty.
Nov 25 17:53:12 np0005535838 systemd: Created slice Slice /system/serial-getty.
Nov 25 17:53:12 np0005535838 systemd: Created slice Slice /system/sshd-keygen.
Nov 25 17:53:12 np0005535838 systemd: Created slice User and Session Slice.
Nov 25 17:53:12 np0005535838 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 17:53:12 np0005535838 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 25 17:53:12 np0005535838 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 17:53:12 np0005535838 systemd: Reached target Local Encrypted Volumes.
Nov 25 17:53:12 np0005535838 systemd: Stopped target Switch Root.
Nov 25 17:53:12 np0005535838 systemd: Stopped target Initrd File Systems.
Nov 25 17:53:12 np0005535838 systemd: Stopped target Initrd Root File System.
Nov 25 17:53:12 np0005535838 systemd: Reached target Local Integrity Protected Volumes.
Nov 25 17:53:12 np0005535838 systemd: Reached target Path Units.
Nov 25 17:53:12 np0005535838 systemd: Reached target rpc_pipefs.target.
Nov 25 17:53:12 np0005535838 systemd: Reached target Slice Units.
Nov 25 17:53:12 np0005535838 systemd: Reached target Swaps.
Nov 25 17:53:12 np0005535838 systemd: Reached target Local Verity Protected Volumes.
Nov 25 17:53:12 np0005535838 systemd: Listening on RPCbind Server Activation Socket.
Nov 25 17:53:12 np0005535838 systemd: Reached target RPC Port Mapper.
Nov 25 17:53:12 np0005535838 systemd: Listening on Process Core Dump Socket.
Nov 25 17:53:12 np0005535838 systemd: Listening on initctl Compatibility Named Pipe.
Nov 25 17:53:12 np0005535838 systemd: Listening on udev Control Socket.
Nov 25 17:53:12 np0005535838 systemd: Listening on udev Kernel Socket.
Nov 25 17:53:12 np0005535838 systemd: Mounting Huge Pages File System...
Nov 25 17:53:12 np0005535838 systemd: Mounting POSIX Message Queue File System...
Nov 25 17:53:12 np0005535838 systemd: Mounting Kernel Debug File System...
Nov 25 17:53:12 np0005535838 systemd: Mounting Kernel Trace File System...
Nov 25 17:53:12 np0005535838 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 17:53:12 np0005535838 systemd: Starting Create List of Static Device Nodes...
Nov 25 17:53:12 np0005535838 systemd: Starting Load Kernel Module configfs...
Nov 25 17:53:12 np0005535838 systemd: Starting Load Kernel Module drm...
Nov 25 17:53:12 np0005535838 systemd: Starting Load Kernel Module efi_pstore...
Nov 25 17:53:12 np0005535838 systemd: Starting Load Kernel Module fuse...
Nov 25 17:53:12 np0005535838 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 17:53:12 np0005535838 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 25 17:53:12 np0005535838 systemd: Stopped File System Check on Root Device.
Nov 25 17:53:12 np0005535838 systemd: Stopped Journal Service.
Nov 25 17:53:12 np0005535838 systemd: Starting Journal Service...
Nov 25 17:53:12 np0005535838 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 17:53:12 np0005535838 systemd: Starting Generate network units from Kernel command line...
Nov 25 17:53:12 np0005535838 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 17:53:12 np0005535838 systemd: Starting Remount Root and Kernel File Systems...
Nov 25 17:53:12 np0005535838 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 17:53:12 np0005535838 systemd: Starting Apply Kernel Variables...
Nov 25 17:53:12 np0005535838 systemd-journald[675]: Journal started
Nov 25 17:53:12 np0005535838 systemd-journald[675]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 25 17:53:12 np0005535838 systemd[1]: Queued start job for default target Multi-User System.
Nov 25 17:53:12 np0005535838 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 17:53:12 np0005535838 kernel: fuse: init (API version 7.37)
Nov 25 17:53:12 np0005535838 systemd: Starting Coldplug All udev Devices...
Nov 25 17:53:12 np0005535838 systemd: Started Journal Service.
Nov 25 17:53:12 np0005535838 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 25 17:53:12 np0005535838 systemd[1]: Mounted Huge Pages File System.
Nov 25 17:53:12 np0005535838 systemd[1]: Mounted POSIX Message Queue File System.
Nov 25 17:53:12 np0005535838 systemd[1]: Mounted Kernel Debug File System.
Nov 25 17:53:12 np0005535838 systemd[1]: Mounted Kernel Trace File System.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 17:53:12 np0005535838 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 17:53:12 np0005535838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 17:53:12 np0005535838 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Load Kernel Module fuse.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Apply Kernel Variables.
Nov 25 17:53:12 np0005535838 kernel: ACPI: bus type drm_connector registered
Nov 25 17:53:12 np0005535838 systemd[1]: Mounting FUSE Control File System...
Nov 25 17:53:12 np0005535838 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Rebuild Hardware Database...
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 17:53:12 np0005535838 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Create System Users...
Nov 25 17:53:12 np0005535838 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Load Kernel Module drm.
Nov 25 17:53:12 np0005535838 systemd-journald[675]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 25 17:53:12 np0005535838 systemd-journald[675]: Received client request to flush runtime journal.
Nov 25 17:53:12 np0005535838 systemd[1]: Mounted FUSE Control File System.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 17:53:12 np0005535838 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Create System Users.
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 17:53:12 np0005535838 systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 17:53:12 np0005535838 systemd[1]: Reached target Local File Systems.
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 17:53:12 np0005535838 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 17:53:12 np0005535838 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 17:53:12 np0005535838 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 17:53:12 np0005535838 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 17:53:12 np0005535838 bootctl[692]: Couldn't find EFI system partition, skipping.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Automatic Boot Loader Update.
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Security Auditing Service...
Nov 25 17:53:12 np0005535838 systemd[1]: Starting RPC Bind...
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 17:53:12 np0005535838 auditd[698]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 17:53:12 np0005535838 auditd[698]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 17:53:12 np0005535838 systemd[1]: Started RPC Bind.
Nov 25 17:53:12 np0005535838 augenrules[703]: /sbin/augenrules: No change
Nov 25 17:53:12 np0005535838 augenrules[718]: No rules
Nov 25 17:53:12 np0005535838 augenrules[718]: enabled 1
Nov 25 17:53:12 np0005535838 augenrules[718]: failure 1
Nov 25 17:53:12 np0005535838 augenrules[718]: pid 698
Nov 25 17:53:12 np0005535838 augenrules[718]: rate_limit 0
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog_limit 8192
Nov 25 17:53:12 np0005535838 augenrules[718]: lost 0
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog 4
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog_wait_time 60000
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog_wait_time_actual 0
Nov 25 17:53:12 np0005535838 augenrules[718]: enabled 1
Nov 25 17:53:12 np0005535838 augenrules[718]: failure 1
Nov 25 17:53:12 np0005535838 augenrules[718]: pid 698
Nov 25 17:53:12 np0005535838 augenrules[718]: rate_limit 0
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog_limit 8192
Nov 25 17:53:12 np0005535838 augenrules[718]: lost 0
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog 4
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog_wait_time 60000
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog_wait_time_actual 0
Nov 25 17:53:12 np0005535838 augenrules[718]: enabled 1
Nov 25 17:53:12 np0005535838 augenrules[718]: failure 1
Nov 25 17:53:12 np0005535838 augenrules[718]: pid 698
Nov 25 17:53:12 np0005535838 augenrules[718]: rate_limit 0
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog_limit 8192
Nov 25 17:53:12 np0005535838 augenrules[718]: lost 0
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog 4
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog_wait_time 60000
Nov 25 17:53:12 np0005535838 augenrules[718]: backlog_wait_time_actual 0
Nov 25 17:53:12 np0005535838 systemd[1]: Started Security Auditing Service.
Nov 25 17:53:12 np0005535838 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 17:53:12 np0005535838 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 17:53:13 np0005535838 systemd[1]: Finished Rebuild Hardware Database.
Nov 25 17:53:13 np0005535838 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 17:53:13 np0005535838 systemd-udevd[726]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 17:53:13 np0005535838 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 17:53:13 np0005535838 systemd[1]: Starting Update is Completed...
Nov 25 17:53:13 np0005535838 systemd[1]: Finished Update is Completed.
Nov 25 17:53:13 np0005535838 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 17:53:13 np0005535838 systemd[1]: Reached target System Initialization.
Nov 25 17:53:13 np0005535838 systemd[1]: Started dnf makecache --timer.
Nov 25 17:53:13 np0005535838 systemd[1]: Started Daily rotation of log files.
Nov 25 17:53:13 np0005535838 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 17:53:13 np0005535838 systemd[1]: Reached target Timer Units.
Nov 25 17:53:13 np0005535838 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 17:53:13 np0005535838 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 17:53:13 np0005535838 systemd[1]: Reached target Socket Units.
Nov 25 17:53:13 np0005535838 systemd[1]: Starting D-Bus System Message Bus...
Nov 25 17:53:13 np0005535838 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 17:53:13 np0005535838 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 17:53:13 np0005535838 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 17:53:13 np0005535838 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 17:53:13 np0005535838 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 17:53:13 np0005535838 systemd-udevd[747]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:53:13 np0005535838 systemd[1]: Started D-Bus System Message Bus.
Nov 25 17:53:13 np0005535838 systemd[1]: Reached target Basic System.
Nov 25 17:53:13 np0005535838 dbus-broker-lau[758]: Ready
Nov 25 17:53:13 np0005535838 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 25 17:53:13 np0005535838 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 17:53:13 np0005535838 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 17:53:13 np0005535838 systemd[1]: Starting NTP client/server...
Nov 25 17:53:13 np0005535838 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 17:53:13 np0005535838 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 17:53:13 np0005535838 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 17:53:13 np0005535838 systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 17:53:13 np0005535838 systemd[1]: Started irqbalance daemon.
Nov 25 17:53:13 np0005535838 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 17:53:13 np0005535838 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 17:53:13 np0005535838 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 17:53:13 np0005535838 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 17:53:13 np0005535838 systemd[1]: Reached target sshd-keygen.target.
Nov 25 17:53:13 np0005535838 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 17:53:13 np0005535838 systemd[1]: Reached target User and Group Name Lookups.
Nov 25 17:53:13 np0005535838 chronyd[791]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 17:53:13 np0005535838 systemd[1]: Starting User Login Management...
Nov 25 17:53:13 np0005535838 chronyd[791]: Loaded 0 symmetric keys
Nov 25 17:53:13 np0005535838 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 17:53:13 np0005535838 chronyd[791]: Using right/UTC timezone to obtain leap second data
Nov 25 17:53:13 np0005535838 chronyd[791]: Loaded seccomp filter (level 2)
Nov 25 17:53:13 np0005535838 systemd[1]: Started NTP client/server.
Nov 25 17:53:13 np0005535838 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 25 17:53:13 np0005535838 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 25 17:53:13 np0005535838 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 17:53:13 np0005535838 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 17:53:13 np0005535838 kernel: kvm_amd: TSC scaling supported
Nov 25 17:53:13 np0005535838 kernel: kvm_amd: Nested Virtualization enabled
Nov 25 17:53:13 np0005535838 kernel: kvm_amd: Nested Paging enabled
Nov 25 17:53:13 np0005535838 kernel: kvm_amd: LBR virtualization supported
Nov 25 17:53:13 np0005535838 kernel: Console: switching to colour dummy device 80x25
Nov 25 17:53:13 np0005535838 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 17:53:13 np0005535838 kernel: [drm] features: -context_init
Nov 25 17:53:13 np0005535838 kernel: [drm] number of scanouts: 1
Nov 25 17:53:13 np0005535838 kernel: [drm] number of cap sets: 0
Nov 25 17:53:13 np0005535838 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 25 17:53:13 np0005535838 systemd-logind[789]: New seat seat0.
Nov 25 17:53:13 np0005535838 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 17:53:13 np0005535838 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 17:53:13 np0005535838 systemd[1]: Started User Login Management.
Nov 25 17:53:13 np0005535838 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 17:53:13 np0005535838 kernel: Console: switching to colour frame buffer device 128x48
Nov 25 17:53:13 np0005535838 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 17:53:13 np0005535838 iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Nov 25 17:53:13 np0005535838 systemd[1]: Finished IPv4 firewall with iptables.
Nov 25 17:53:14 np0005535838 cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 25 Nov 2025 22:53:14 +0000. Up 6.87 seconds.
Nov 25 17:53:14 np0005535838 systemd[1]: run-cloud\x2dinit-tmp-tmppsbkfyv6.mount: Deactivated successfully.
Nov 25 17:53:14 np0005535838 systemd[1]: Starting Hostname Service...
Nov 25 17:53:14 np0005535838 systemd[1]: Started Hostname Service.
Nov 25 17:53:14 np0005535838 systemd-hostnamed[851]: Hostname set to <np0005535838.novalocal> (static)
Nov 25 17:53:14 np0005535838 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 17:53:14 np0005535838 systemd[1]: Reached target Preparation for Network.
Nov 25 17:53:14 np0005535838 systemd[1]: Starting Network Manager...
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8034] NetworkManager (version 1.54.1-1.el9) is starting... (boot:3edafa6c-db49-405c-9758-42faad226154)
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8039] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8192] manager[0x55b627b65080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8249] hostname: hostname: using hostnamed
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8250] hostname: static hostname changed from (none) to "np0005535838.novalocal"
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8255] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8397] manager[0x55b627b65080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8397] manager[0x55b627b65080]: rfkill: WWAN hardware radio set enabled
Nov 25 17:53:14 np0005535838 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8490] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8490] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8491] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8492] manager: Networking is enabled by state file
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8494] settings: Loaded settings plugin: keyfile (internal)
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8524] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8552] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8581] dhcp: init: Using DHCP client 'internal'
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8584] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8598] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8611] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8619] device (lo): Activation: starting connection 'lo' (8a2e98f0-f5c9-4e09-92f1-2bf0997fed4f)
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8630] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8633] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8664] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8669] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8672] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8674] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8677] device (eth0): carrier: link connected
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8682] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8689] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8696] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 17:53:14 np0005535838 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8700] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8701] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8703] manager: NetworkManager state is now CONNECTING
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8704] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8712] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8715] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 17:53:14 np0005535838 systemd[1]: Started Network Manager.
Nov 25 17:53:14 np0005535838 systemd[1]: Reached target Network.
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8762] dhcp4 (eth0): state changed new lease, address=38.102.83.77
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8772] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8792] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 17:53:14 np0005535838 systemd[1]: Starting Network Manager Wait Online...
Nov 25 17:53:14 np0005535838 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 17:53:14 np0005535838 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8909] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8912] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8919] device (lo): Activation: successful, device activated.
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8926] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8928] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8932] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8936] device (eth0): Activation: successful, device activated.
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8943] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 17:53:14 np0005535838 NetworkManager[855]: <info>  [1764111194.8946] manager: startup complete
Nov 25 17:53:14 np0005535838 systemd[1]: Finished Network Manager Wait Online.
Nov 25 17:53:14 np0005535838 systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 17:53:14 np0005535838 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 17:53:14 np0005535838 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 17:53:14 np0005535838 systemd[1]: Reached target NFS client services.
Nov 25 17:53:14 np0005535838 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 17:53:14 np0005535838 systemd[1]: Reached target Remote File Systems.
Nov 25 17:53:14 np0005535838 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 17:53:15 np0005535838 cloud-init[918]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 25 Nov 2025 22:53:15 +0000. Up 7.91 seconds.
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: |  eth0  | True |         38.102.83.77         | 255.255.255.0 | global | fa:16:3e:1e:be:4b |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe1e:be4b/64 |       .       |  link  | fa:16:3e:1e:be:4b |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 25 17:53:15 np0005535838 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 17:53:16 np0005535838 cloud-init[918]: Generating public/private rsa key pair.
Nov 25 17:53:16 np0005535838 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 17:53:16 np0005535838 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 17:53:16 np0005535838 cloud-init[918]: The key fingerprint is:
Nov 25 17:53:16 np0005535838 cloud-init[918]: SHA256:uEpSwMNUTYaT/3AuVFyqJzt6B7TTBd/+gaZ64cpg6yc root@np0005535838.novalocal
Nov 25 17:53:16 np0005535838 cloud-init[918]: The key's randomart image is:
Nov 25 17:53:16 np0005535838 cloud-init[918]: +---[RSA 3072]----+
Nov 25 17:53:16 np0005535838 cloud-init[918]: |  ...=o. ..      |
Nov 25 17:53:16 np0005535838 cloud-init[918]: | +  +.. +.       |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |  =  o ..o .     |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |   o  =o. o .    |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |    .o+BS. . .   |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |   .  ==+ . + .  |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |  . . *+ . + . . |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |   o +E=o +   .  |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |    ooo+++       |
Nov 25 17:53:16 np0005535838 cloud-init[918]: +----[SHA256]-----+
Nov 25 17:53:16 np0005535838 cloud-init[918]: Generating public/private ecdsa key pair.
Nov 25 17:53:16 np0005535838 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 17:53:16 np0005535838 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 17:53:16 np0005535838 cloud-init[918]: The key fingerprint is:
Nov 25 17:53:16 np0005535838 cloud-init[918]: SHA256:kISfC0IPO3Dpvg8AVjVav5bdtdd7Gtun1kd8uYY1YAo root@np0005535838.novalocal
Nov 25 17:53:16 np0005535838 cloud-init[918]: The key's randomart image is:
Nov 25 17:53:16 np0005535838 cloud-init[918]: +---[ECDSA 256]---+
Nov 25 17:53:16 np0005535838 cloud-init[918]: |   o.=.          |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |. * +.o.         |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |.* = .oo     .   |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |o = o o.+E. .o. .|
Nov 25 17:53:16 np0005535838 cloud-init[918]: |.. o . =S...o..oo|
Nov 25 17:53:16 np0005535838 cloud-init[918]: | ..   o    .  .+=|
Nov 25 17:53:16 np0005535838 cloud-init[918]: |  ..          ++=|
Nov 25 17:53:16 np0005535838 cloud-init[918]: |  ..         ..B=|
Nov 25 17:53:16 np0005535838 cloud-init[918]: |   ..        .=.+|
Nov 25 17:53:16 np0005535838 cloud-init[918]: +----[SHA256]-----+
Nov 25 17:53:16 np0005535838 cloud-init[918]: Generating public/private ed25519 key pair.
Nov 25 17:53:16 np0005535838 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 17:53:16 np0005535838 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 17:53:16 np0005535838 cloud-init[918]: The key fingerprint is:
Nov 25 17:53:16 np0005535838 cloud-init[918]: SHA256:gnRg88rLdICZl6A/nNSr5lFwcw6juAeEoZujCKJjfIQ root@np0005535838.novalocal
Nov 25 17:53:16 np0005535838 cloud-init[918]: The key's randomart image is:
Nov 25 17:53:16 np0005535838 cloud-init[918]: +--[ED25519 256]--+
Nov 25 17:53:16 np0005535838 cloud-init[918]: |. . +            |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |oo B =           |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |+.* @ +          |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |.B.O %           |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |OE*.B + S        |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |B+.* o .         |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |*o=.o            |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |.=..             |
Nov 25 17:53:16 np0005535838 cloud-init[918]: |  .              |
Nov 25 17:53:16 np0005535838 cloud-init[918]: +----[SHA256]-----+
Nov 25 17:53:16 np0005535838 systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 17:53:16 np0005535838 systemd[1]: Reached target Cloud-config availability.
Nov 25 17:53:16 np0005535838 systemd[1]: Reached target Network is Online.
Nov 25 17:53:16 np0005535838 systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 17:53:16 np0005535838 systemd[1]: Starting Crash recovery kernel arming...
Nov 25 17:53:16 np0005535838 systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 17:53:16 np0005535838 systemd[1]: Starting System Logging Service...
Nov 25 17:53:16 np0005535838 systemd[1]: Starting OpenSSH server daemon...
Nov 25 17:53:16 np0005535838 sm-notify[1000]: Version 2.5.4 starting
Nov 25 17:53:16 np0005535838 systemd[1]: Starting Permit User Sessions...
Nov 25 17:53:16 np0005535838 systemd[1]: Started Notify NFS peers of a restart.
Nov 25 17:53:16 np0005535838 systemd[1]: Finished Permit User Sessions.
Nov 25 17:53:16 np0005535838 systemd[1]: Started Command Scheduler.
Nov 25 17:53:16 np0005535838 systemd[1]: Started Getty on tty1.
Nov 25 17:53:16 np0005535838 systemd[1]: Started Serial Getty on ttyS0.
Nov 25 17:53:16 np0005535838 systemd[1]: Reached target Login Prompts.
Nov 25 17:53:16 np0005535838 systemd[1]: Started OpenSSH server daemon.
Nov 25 17:53:16 np0005535838 systemd[1]: Started System Logging Service.
Nov 25 17:53:16 np0005535838 rsyslogd[1001]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1001" x-info="https://www.rsyslog.com"] start
Nov 25 17:53:16 np0005535838 rsyslogd[1001]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 17:53:16 np0005535838 systemd[1]: Reached target Multi-User System.
Nov 25 17:53:16 np0005535838 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 17:53:16 np0005535838 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 17:53:16 np0005535838 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 17:53:16 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:53:16 np0005535838 kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Nov 25 17:53:16 np0005535838 kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 17:53:17 np0005535838 cloud-init[1122]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 25 Nov 2025 22:53:17 +0000. Up 9.75 seconds.
Nov 25 17:53:17 np0005535838 systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 17:53:17 np0005535838 systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 17:53:17 np0005535838 cloud-init[1278]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 25 Nov 2025 22:53:17 +0000. Up 10.14 seconds.
Nov 25 17:53:17 np0005535838 dracut[1283]: dracut-057-102.git20250818.el9
Nov 25 17:53:17 np0005535838 cloud-init[1292]: #############################################################
Nov 25 17:53:17 np0005535838 cloud-init[1299]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 17:53:17 np0005535838 cloud-init[1303]: 256 SHA256:kISfC0IPO3Dpvg8AVjVav5bdtdd7Gtun1kd8uYY1YAo root@np0005535838.novalocal (ECDSA)
Nov 25 17:53:17 np0005535838 cloud-init[1305]: 256 SHA256:gnRg88rLdICZl6A/nNSr5lFwcw6juAeEoZujCKJjfIQ root@np0005535838.novalocal (ED25519)
Nov 25 17:53:17 np0005535838 cloud-init[1307]: 3072 SHA256:uEpSwMNUTYaT/3AuVFyqJzt6B7TTBd/+gaZ64cpg6yc root@np0005535838.novalocal (RSA)
Nov 25 17:53:17 np0005535838 cloud-init[1308]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 17:53:17 np0005535838 cloud-init[1309]: #############################################################
Nov 25 17:53:17 np0005535838 cloud-init[1278]: Cloud-init v. 24.4-7.el9 finished at Tue, 25 Nov 2025 22:53:17 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.38 seconds
Nov 25 17:53:17 np0005535838 dracut[1285]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 25 17:53:17 np0005535838 systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 17:53:17 np0005535838 systemd[1]: Reached target Cloud-init target.
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 17:53:18 np0005535838 dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: memstrack is not available
Nov 25 17:53:19 np0005535838 dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 17:53:19 np0005535838 dracut[1285]: memstrack is not available
Nov 25 17:53:19 np0005535838 dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 17:53:19 np0005535838 dracut[1285]: *** Including module: systemd ***
Nov 25 17:53:19 np0005535838 chronyd[791]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 25 17:53:19 np0005535838 chronyd[791]: System clock TAI offset set to 37 seconds
Nov 25 17:53:20 np0005535838 dracut[1285]: *** Including module: fips ***
Nov 25 17:53:20 np0005535838 dracut[1285]: *** Including module: systemd-initrd ***
Nov 25 17:53:20 np0005535838 dracut[1285]: *** Including module: i18n ***
Nov 25 17:53:20 np0005535838 dracut[1285]: *** Including module: drm ***
Nov 25 17:53:21 np0005535838 dracut[1285]: *** Including module: prefixdevname ***
Nov 25 17:53:21 np0005535838 dracut[1285]: *** Including module: kernel-modules ***
Nov 25 17:53:21 np0005535838 kernel: block vda: the capability attribute has been deprecated.
Nov 25 17:53:22 np0005535838 dracut[1285]: *** Including module: kernel-modules-extra ***
Nov 25 17:53:22 np0005535838 dracut[1285]: *** Including module: qemu ***
Nov 25 17:53:22 np0005535838 dracut[1285]: *** Including module: fstab-sys ***
Nov 25 17:53:22 np0005535838 dracut[1285]: *** Including module: rootfs-block ***
Nov 25 17:53:22 np0005535838 dracut[1285]: *** Including module: terminfo ***
Nov 25 17:53:22 np0005535838 dracut[1285]: *** Including module: udev-rules ***
Nov 25 17:53:23 np0005535838 dracut[1285]: Skipping udev rule: 91-permissions.rules
Nov 25 17:53:23 np0005535838 dracut[1285]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 17:53:23 np0005535838 dracut[1285]: *** Including module: virtiofs ***
Nov 25 17:53:23 np0005535838 dracut[1285]: *** Including module: dracut-systemd ***
Nov 25 17:53:23 np0005535838 irqbalance[783]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 25 17:53:23 np0005535838 irqbalance[783]: IRQ 25 affinity is now unmanaged
Nov 25 17:53:23 np0005535838 irqbalance[783]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 25 17:53:23 np0005535838 irqbalance[783]: IRQ 31 affinity is now unmanaged
Nov 25 17:53:23 np0005535838 irqbalance[783]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 25 17:53:23 np0005535838 irqbalance[783]: IRQ 28 affinity is now unmanaged
Nov 25 17:53:23 np0005535838 irqbalance[783]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 25 17:53:23 np0005535838 irqbalance[783]: IRQ 32 affinity is now unmanaged
Nov 25 17:53:23 np0005535838 irqbalance[783]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 25 17:53:23 np0005535838 irqbalance[783]: IRQ 30 affinity is now unmanaged
Nov 25 17:53:23 np0005535838 irqbalance[783]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 25 17:53:23 np0005535838 irqbalance[783]: IRQ 29 affinity is now unmanaged
Nov 25 17:53:23 np0005535838 dracut[1285]: *** Including module: usrmount ***
Nov 25 17:53:23 np0005535838 dracut[1285]: *** Including module: base ***
Nov 25 17:53:23 np0005535838 dracut[1285]: *** Including module: fs-lib ***
Nov 25 17:53:23 np0005535838 dracut[1285]: *** Including module: kdumpbase ***
Nov 25 17:53:24 np0005535838 dracut[1285]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 17:53:24 np0005535838 dracut[1285]:  microcode_ctl module: mangling fw_dir
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 17:53:24 np0005535838 dracut[1285]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 25 17:53:24 np0005535838 dracut[1285]: *** Including module: openssl ***
Nov 25 17:53:24 np0005535838 dracut[1285]: *** Including module: shutdown ***
Nov 25 17:53:24 np0005535838 dracut[1285]: *** Including module: squash ***
Nov 25 17:53:24 np0005535838 dracut[1285]: *** Including modules done ***
Nov 25 17:53:24 np0005535838 dracut[1285]: *** Installing kernel module dependencies ***
Nov 25 17:53:25 np0005535838 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 17:53:25 np0005535838 dracut[1285]: *** Installing kernel module dependencies done ***
Nov 25 17:53:25 np0005535838 dracut[1285]: *** Resolving executable dependencies ***
Nov 25 17:53:27 np0005535838 dracut[1285]: *** Resolving executable dependencies done ***
Nov 25 17:53:27 np0005535838 dracut[1285]: *** Generating early-microcode cpio image ***
Nov 25 17:53:27 np0005535838 dracut[1285]: *** Store current command line parameters ***
Nov 25 17:53:27 np0005535838 dracut[1285]: Stored kernel commandline:
Nov 25 17:53:27 np0005535838 dracut[1285]: No dracut internal kernel commandline stored in the initramfs
Nov 25 17:53:28 np0005535838 dracut[1285]: *** Install squash loader ***
Nov 25 17:53:29 np0005535838 dracut[1285]: *** Squashing the files inside the initramfs ***
Nov 25 17:53:30 np0005535838 dracut[1285]: *** Squashing the files inside the initramfs done ***
Nov 25 17:53:30 np0005535838 dracut[1285]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 17:53:30 np0005535838 dracut[1285]: *** Hardlinking files ***
Nov 25 17:53:30 np0005535838 dracut[1285]: *** Hardlinking files done ***
Nov 25 17:53:30 np0005535838 dracut[1285]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 25 17:53:31 np0005535838 kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Nov 25 17:53:31 np0005535838 kdumpctl[1015]: kdump: Starting kdump: [OK]
Nov 25 17:53:31 np0005535838 systemd[1]: Finished Crash recovery kernel arming.
Nov 25 17:53:31 np0005535838 systemd[1]: Startup finished in 1.611s (kernel) + 2.700s (initrd) + 19.819s (userspace) = 24.130s.
Nov 25 17:53:44 np0005535838 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 17:54:26 np0005535838 chronyd[791]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Nov 25 17:55:13 np0005535838 systemd[1]: Created slice User Slice of UID 1000.
Nov 25 17:55:13 np0005535838 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 17:55:13 np0005535838 systemd-logind[789]: New session 1 of user zuul.
Nov 25 17:55:13 np0005535838 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 17:55:13 np0005535838 systemd[1]: Starting User Manager for UID 1000...
Nov 25 17:55:13 np0005535838 systemd[4299]: Queued start job for default target Main User Target.
Nov 25 17:55:13 np0005535838 systemd[4299]: Created slice User Application Slice.
Nov 25 17:55:13 np0005535838 systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 17:55:13 np0005535838 systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 17:55:13 np0005535838 systemd[4299]: Reached target Paths.
Nov 25 17:55:13 np0005535838 systemd[4299]: Reached target Timers.
Nov 25 17:55:13 np0005535838 systemd[4299]: Starting D-Bus User Message Bus Socket...
Nov 25 17:55:13 np0005535838 systemd[4299]: Starting Create User's Volatile Files and Directories...
Nov 25 17:55:13 np0005535838 systemd[4299]: Finished Create User's Volatile Files and Directories.
Nov 25 17:55:13 np0005535838 systemd[4299]: Listening on D-Bus User Message Bus Socket.
Nov 25 17:55:13 np0005535838 systemd[4299]: Reached target Sockets.
Nov 25 17:55:13 np0005535838 systemd[4299]: Reached target Basic System.
Nov 25 17:55:13 np0005535838 systemd[4299]: Reached target Main User Target.
Nov 25 17:55:13 np0005535838 systemd[4299]: Startup finished in 149ms.
Nov 25 17:55:13 np0005535838 systemd[1]: Started User Manager for UID 1000.
Nov 25 17:55:13 np0005535838 systemd[1]: Started Session 1 of User zuul.
Nov 25 17:55:13 np0005535838 python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 17:55:16 np0005535838 python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 17:55:22 np0005535838 python3[4467]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 17:55:23 np0005535838 python3[4507]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 17:55:25 np0005535838 python3[4533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmokCZV1ohA/dBgPDQFwsEwGi7bp67XDWUBg6xHh4L1zLa2Z9jcISwU5X/eepX1jQEEUGEzBp1VGdiXF/CmVDSdK2t37ngENzNgsU4CiBjcbHIylEOzktW7a/NZ44lwVzaiop/lmFbnQWjWk/Z2FH7jlY1Gl9SNoM1knAZtsnTr8ciuHNWq+P4NbgWh1dhyVXLPtRk4OiQ/byAY3BXNE3XpwaJBCf5ESIBnUO9LMxbyroV1fA1HPEtehd/9n4SyomvHTdWzGApF7Swo0B4uhR2HP56EvPAfPSHZ/t0HUl7tjAxbzVxa1poJDIRJXaWqbnM9n+yHk3lrw+aLZ8ETpr8JEai4EjgxJjjU6ePJsZYDEmVPHEABhh5cm14mtT+Hs83k1jRcNNM5jNKXf/I6tlvaS++xXjg+QbQs8yVQ+A+dV7OFyUSIbeYqIXsUIux5gFT9Yt9HHi4e4AkEdU2fxRyPg4ekUdVmcaLeTk7RacG1bNcez9g7EP2zg3IVUuu6zk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:25 np0005535838 python3[4557]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:26 np0005535838 python3[4656]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 17:55:26 np0005535838 python3[4727]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764111326.0321875-207-170237529251704/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0ef163a185774c22975fb6ea7653e91f_id_rsa follow=False checksum=07736deb23d8e4b2f267fc54a7c47707037310e8 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:27 np0005535838 python3[4850]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 17:55:28 np0005535838 python3[4921]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764111327.3414114-240-182833777339376/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0ef163a185774c22975fb6ea7653e91f_id_rsa.pub follow=False checksum=2b051e17575fcad56bb6c6937fd1a45598225ee2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:30 np0005535838 python3[4969]: ansible-ping Invoked with data=pong
Nov 25 17:55:31 np0005535838 python3[4993]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 17:55:32 np0005535838 python3[5051]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 17:55:33 np0005535838 python3[5083]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:33 np0005535838 python3[5107]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:34 np0005535838 python3[5131]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:34 np0005535838 python3[5155]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:34 np0005535838 python3[5179]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:35 np0005535838 python3[5203]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:36 np0005535838 python3[5229]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:37 np0005535838 python3[5307]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 17:55:37 np0005535838 python3[5380]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764111336.7685905-21-41165948861193/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:38 np0005535838 python3[5428]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:38 np0005535838 python3[5452]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:38 np0005535838 python3[5476]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:39 np0005535838 python3[5500]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:39 np0005535838 python3[5524]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:39 np0005535838 python3[5548]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:40 np0005535838 python3[5572]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:40 np0005535838 python3[5596]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:40 np0005535838 python3[5620]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:41 np0005535838 python3[5644]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:41 np0005535838 python3[5668]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:41 np0005535838 python3[5692]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:41 np0005535838 python3[5716]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:42 np0005535838 python3[5740]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:42 np0005535838 python3[5764]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:42 np0005535838 python3[5788]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:43 np0005535838 python3[5812]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:43 np0005535838 python3[5836]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:43 np0005535838 python3[5860]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:43 np0005535838 python3[5884]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:44 np0005535838 python3[5908]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:44 np0005535838 python3[5932]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:44 np0005535838 python3[5956]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:45 np0005535838 python3[5980]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:45 np0005535838 python3[6004]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:45 np0005535838 python3[6028]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 17:55:48 np0005535838 python3[6054]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 17:55:48 np0005535838 systemd[1]: Starting Time & Date Service...
Nov 25 17:55:48 np0005535838 systemd[1]: Started Time & Date Service.
Nov 25 17:55:48 np0005535838 systemd-timedated[6056]: Changed time zone to 'UTC' (UTC).
Nov 25 17:55:48 np0005535838 python3[6085]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:49 np0005535838 python3[6161]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 17:55:49 np0005535838 python3[6232]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764111349.067419-153-185972815722863/source _original_basename=tmp93__27ds follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:50 np0005535838 python3[6332]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 17:55:50 np0005535838 python3[6403]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764111350.0090945-183-250604696236522/source _original_basename=tmpps097i7u follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:51 np0005535838 python3[6505]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 17:55:51 np0005535838 python3[6578]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764111351.129548-231-81003696640765/source _original_basename=tmp6iqmex1w follow=False checksum=b5d32a20a180d280e10f96c8ee4e4addc6022f99 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:52 np0005535838 python3[6626]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 17:55:52 np0005535838 python3[6652]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 17:55:53 np0005535838 python3[6732]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 17:55:53 np0005535838 python3[6805]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764111352.9069505-273-148198345768801/source _original_basename=tmpbvvfsosz follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:55:54 np0005535838 python3[6856]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-8aa9-b87e-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 17:55:54 np0005535838 python3[6884]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8aa9-b87e-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 17:55:56 np0005535838 python3[6912]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:56:13 np0005535838 python3[6938]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:56:18 np0005535838 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 17:56:47 np0005535838 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 17:56:47 np0005535838 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 25 17:56:47 np0005535838 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 17:56:47 np0005535838 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 17:56:47 np0005535838 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 25 17:56:47 np0005535838 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 25 17:56:47 np0005535838 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 25 17:56:47 np0005535838 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 25 17:56:47 np0005535838 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 25 17:56:47 np0005535838 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.7849] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 17:56:47 np0005535838 systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8044] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8087] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8093] device (eth1): carrier: link connected
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8096] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8106] policy: auto-activating connection 'Wired connection 1' (395ca42c-36a5-36fb-90b5-378a60416156)
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8113] device (eth1): Activation: starting connection 'Wired connection 1' (395ca42c-36a5-36fb-90b5-378a60416156)
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8115] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8118] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8125] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 17:56:47 np0005535838 NetworkManager[855]: <info>  [1764111407.8133] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 17:56:48 np0005535838 python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-119a-e3c0-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 17:56:58 np0005535838 python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 17:56:59 np0005535838 python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764111418.5904531-102-222388212489256/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=deaca1671c15558608e1a78179cd209f2d80334e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:57:00 np0005535838 python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 17:57:00 np0005535838 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 17:57:00 np0005535838 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 17:57:00 np0005535838 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 17:57:00 np0005535838 systemd[1]: Stopping Network Manager...
Nov 25 17:57:00 np0005535838 NetworkManager[855]: <info>  [1764111420.2869] caught SIGTERM, shutting down normally.
Nov 25 17:57:00 np0005535838 NetworkManager[855]: <info>  [1764111420.2877] dhcp4 (eth0): canceled DHCP transaction
Nov 25 17:57:00 np0005535838 NetworkManager[855]: <info>  [1764111420.2877] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 17:57:00 np0005535838 NetworkManager[855]: <info>  [1764111420.2877] dhcp4 (eth0): state changed no lease
Nov 25 17:57:00 np0005535838 NetworkManager[855]: <info>  [1764111420.2878] manager: NetworkManager state is now CONNECTING
Nov 25 17:57:00 np0005535838 NetworkManager[855]: <info>  [1764111420.2962] dhcp4 (eth1): canceled DHCP transaction
Nov 25 17:57:00 np0005535838 NetworkManager[855]: <info>  [1764111420.2962] dhcp4 (eth1): state changed no lease
Nov 25 17:57:00 np0005535838 NetworkManager[855]: <info>  [1764111420.3006] exiting (success)
Nov 25 17:57:00 np0005535838 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 17:57:00 np0005535838 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 17:57:00 np0005535838 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 17:57:00 np0005535838 systemd[1]: Stopped Network Manager.
Nov 25 17:57:00 np0005535838 systemd[1]: NetworkManager.service: Consumed 1.622s CPU time, 10.0M memory peak.
Nov 25 17:57:00 np0005535838 systemd[1]: Starting Network Manager...
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.3559] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3edafa6c-db49-405c-9758-42faad226154)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.3564] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.3630] manager[0x557215882070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 17:57:00 np0005535838 systemd[1]: Starting Hostname Service...
Nov 25 17:57:00 np0005535838 systemd[1]: Started Hostname Service.
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4341] hostname: hostname: using hostnamed
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4344] hostname: static hostname changed from (none) to "np0005535838.novalocal"
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4351] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4358] manager[0x557215882070]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4358] manager[0x557215882070]: rfkill: WWAN hardware radio set enabled
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4403] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4403] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4404] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4405] manager: Networking is enabled by state file
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4409] settings: Loaded settings plugin: keyfile (internal)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4416] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4457] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4472] dhcp: init: Using DHCP client 'internal'
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4477] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4484] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4492] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4506] device (lo): Activation: starting connection 'lo' (8a2e98f0-f5c9-4e09-92f1-2bf0997fed4f)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4518] device (eth0): carrier: link connected
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4526] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4534] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4535] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4546] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4558] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4569] device (eth1): carrier: link connected
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4577] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4585] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (395ca42c-36a5-36fb-90b5-378a60416156) (indicated)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4586] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4597] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4608] device (eth1): Activation: starting connection 'Wired connection 1' (395ca42c-36a5-36fb-90b5-378a60416156)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4618] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 17:57:00 np0005535838 systemd[1]: Started Network Manager.
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4624] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4627] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4630] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4635] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4657] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4661] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4664] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4667] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4675] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4678] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4686] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4689] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4703] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4710] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4716] device (lo): Activation: successful, device activated.
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4740] dhcp4 (eth0): state changed new lease, address=38.102.83.77
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4746] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 17:57:00 np0005535838 systemd[1]: Starting Network Manager Wait Online...
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4816] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4833] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4835] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4838] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4842] device (eth0): Activation: successful, device activated.
Nov 25 17:57:00 np0005535838 NetworkManager[7181]: <info>  [1764111420.4847] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 17:57:00 np0005535838 python3[7257]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-119a-e3c0-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 17:57:10 np0005535838 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 17:57:30 np0005535838 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.2822] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 17:57:45 np0005535838 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 17:57:45 np0005535838 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3110] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3113] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3125] device (eth1): Activation: successful, device activated.
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3136] manager: startup complete
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3141] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <warn>  [1764111465.3172] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 17:57:45 np0005535838 systemd[1]: Finished Network Manager Wait Online.
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3187] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3273] dhcp4 (eth1): canceled DHCP transaction
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3274] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3274] dhcp4 (eth1): state changed no lease
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3297] policy: auto-activating connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3304] device (eth1): Activation: starting connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3305] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3309] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3318] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3328] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3825] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3830] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 17:57:45 np0005535838 NetworkManager[7181]: <info>  [1764111465.3845] device (eth1): Activation: successful, device activated.
Nov 25 17:57:55 np0005535838 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 17:58:00 np0005535838 systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Nov 25 17:58:01 np0005535838 systemd[4299]: Starting Mark boot as successful...
Nov 25 17:58:01 np0005535838 systemd[4299]: Finished Mark boot as successful.
Nov 25 17:58:01 np0005535838 systemd-logind[789]: New session 3 of user zuul.
Nov 25 17:58:01 np0005535838 systemd[1]: Started Session 3 of User zuul.
Nov 25 17:58:01 np0005535838 python3[7367]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 17:58:02 np0005535838 python3[7440]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764111481.446876-267-171045953555470/source _original_basename=tmp5nlays8v follow=False checksum=03a7293d1f772836c1203af9d59475bc4177093e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 17:58:04 np0005535838 systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 17:58:04 np0005535838 systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Nov 25 17:58:04 np0005535838 systemd-logind[789]: Removed session 3.
Nov 25 18:00:56 np0005535838 chronyd[791]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 25 18:01:01 np0005535838 systemd[4299]: Created slice User Background Tasks Slice.
Nov 25 18:01:01 np0005535838 systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 18:01:01 np0005535838 systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 18:05:22 np0005535838 systemd-logind[789]: New session 4 of user zuul.
Nov 25 18:05:22 np0005535838 systemd[1]: Started Session 4 of User zuul.
Nov 25 18:05:22 np0005535838 python3[7543]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-ed86-e3a6-000000001cd6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:05:22 np0005535838 python3[7571]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:05:22 np0005535838 python3[7598]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:05:23 np0005535838 python3[7624]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:05:23 np0005535838 python3[7650]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:05:23 np0005535838 python3[7676]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:05:24 np0005535838 python3[7754]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:05:24 np0005535838 python3[7827]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764111924.1817765-478-259793698352113/source _original_basename=tmp568700zs follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:05:25 np0005535838 python3[7877]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 18:05:25 np0005535838 systemd[1]: Reloading.
Nov 25 18:05:25 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:05:27 np0005535838 python3[7932]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 18:05:27 np0005535838 python3[7958]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:05:28 np0005535838 python3[7986]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:05:28 np0005535838 python3[8014]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:05:28 np0005535838 python3[8042]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:05:29 np0005535838 python3[8069]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-ed86-e3a6-000000001cdd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:05:29 np0005535838 python3[8099]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 18:05:31 np0005535838 systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 18:05:31 np0005535838 systemd[1]: session-4.scope: Consumed 4.927s CPU time.
Nov 25 18:05:31 np0005535838 systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Nov 25 18:05:31 np0005535838 systemd-logind[789]: Removed session 4.
Nov 25 18:05:33 np0005535838 systemd-logind[789]: New session 5 of user zuul.
Nov 25 18:05:33 np0005535838 systemd[1]: Started Session 5 of User zuul.
Nov 25 18:05:33 np0005535838 python3[8133]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 18:05:50 np0005535838 kernel: SELinux:  Converting 386 SID table entries...
Nov 25 18:05:50 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:05:50 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:05:50 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:05:50 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:05:50 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:05:50 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:05:50 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:06:00 np0005535838 kernel: SELinux:  Converting 386 SID table entries...
Nov 25 18:06:00 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:06:00 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:06:00 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:06:00 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:06:00 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:06:00 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:06:00 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:06:08 np0005535838 kernel: SELinux:  Converting 386 SID table entries...
Nov 25 18:06:08 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:06:08 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:06:08 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:06:08 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:06:08 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:06:08 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:06:08 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:06:09 np0005535838 setsebool[8200]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 18:06:09 np0005535838 setsebool[8200]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 25 18:06:20 np0005535838 kernel: SELinux:  Converting 389 SID table entries...
Nov 25 18:06:20 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:06:20 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:06:20 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:06:20 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:06:20 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:06:20 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:06:20 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:06:39 np0005535838 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 18:06:39 np0005535838 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 18:06:39 np0005535838 systemd[1]: Starting man-db-cache-update.service...
Nov 25 18:06:39 np0005535838 systemd[1]: Reloading.
Nov 25 18:06:39 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:06:39 np0005535838 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 18:06:41 np0005535838 python3[10418]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-0553-b758-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:06:42 np0005535838 kernel: evm: overlay not supported
Nov 25 18:06:42 np0005535838 systemd[4299]: Starting D-Bus User Message Bus...
Nov 25 18:06:42 np0005535838 dbus-broker-launch[11414]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 18:06:42 np0005535838 dbus-broker-launch[11414]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 18:06:42 np0005535838 systemd[4299]: Started D-Bus User Message Bus.
Nov 25 18:06:42 np0005535838 dbus-broker-lau[11414]: Ready
Nov 25 18:06:42 np0005535838 systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 18:06:42 np0005535838 systemd[4299]: Created slice Slice /user.
Nov 25 18:06:42 np0005535838 systemd[4299]: podman-11257.scope: unit configures an IP firewall, but not running as root.
Nov 25 18:06:42 np0005535838 systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 18:06:42 np0005535838 systemd[4299]: Started podman-11257.scope.
Nov 25 18:06:43 np0005535838 systemd[4299]: Started podman-pause-11b68d86.scope.
Nov 25 18:06:43 np0005535838 python3[11985]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.64:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.64:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:06:43 np0005535838 python3[11985]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 25 18:06:43 np0005535838 systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Nov 25 18:06:43 np0005535838 systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 18:06:43 np0005535838 systemd[1]: session-5.scope: Consumed 1min 2.351s CPU time.
Nov 25 18:06:44 np0005535838 systemd-logind[789]: Removed session 5.
Nov 25 18:07:06 np0005535838 systemd-logind[789]: New session 6 of user zuul.
Nov 25 18:07:06 np0005535838 systemd[1]: Started Session 6 of User zuul.
Nov 25 18:07:06 np0005535838 python3[20389]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPM7Qil9AMr2etKuOlKIrXGcRMkAFiz1ts6BOxq0bX4RBlm/XJgvcH7YJWV/REzh/qqlLkHxRdWzWpdOjdNO7dM= zuul@np0005535837.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 18:07:07 np0005535838 python3[20551]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPM7Qil9AMr2etKuOlKIrXGcRMkAFiz1ts6BOxq0bX4RBlm/XJgvcH7YJWV/REzh/qqlLkHxRdWzWpdOjdNO7dM= zuul@np0005535837.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 18:07:08 np0005535838 python3[20810]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005535838.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 18:07:08 np0005535838 python3[20988]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPM7Qil9AMr2etKuOlKIrXGcRMkAFiz1ts6BOxq0bX4RBlm/XJgvcH7YJWV/REzh/qqlLkHxRdWzWpdOjdNO7dM= zuul@np0005535837.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 18:07:09 np0005535838 python3[21233]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:07:09 np0005535838 python3[21479]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764112028.8805213-135-84787243895554/source _original_basename=tmp1n04_1_m follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:07:10 np0005535838 python3[21752]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 25 18:07:10 np0005535838 systemd[1]: Starting Hostname Service...
Nov 25 18:07:10 np0005535838 systemd[1]: Started Hostname Service.
Nov 25 18:07:10 np0005535838 systemd-hostnamed[21851]: Changed pretty hostname to 'compute-0'
Nov 25 18:07:10 np0005535838 systemd-hostnamed[21851]: Hostname set to <compute-0> (static)
Nov 25 18:07:10 np0005535838 NetworkManager[7181]: <info>  [1764112030.8148] hostname: static hostname changed from "np0005535838.novalocal" to "compute-0"
Nov 25 18:07:10 np0005535838 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 18:07:10 np0005535838 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 18:07:11 np0005535838 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 18:07:11 np0005535838 systemd[1]: session-6.scope: Consumed 2.615s CPU time.
Nov 25 18:07:11 np0005535838 systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Nov 25 18:07:11 np0005535838 systemd-logind[789]: Removed session 6.
Nov 25 18:07:20 np0005535838 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 18:07:37 np0005535838 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 18:07:37 np0005535838 systemd[1]: Finished man-db-cache-update.service.
Nov 25 18:07:37 np0005535838 systemd[1]: man-db-cache-update.service: Consumed 1min 10.338s CPU time.
Nov 25 18:07:37 np0005535838 systemd[1]: run-r65364839656d4d5385082e5d9b56d764.service: Deactivated successfully.
Nov 25 18:07:40 np0005535838 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 18:08:12 np0005535838 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 18:08:12 np0005535838 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 18:08:12 np0005535838 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 18:08:12 np0005535838 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 18:11:05 np0005535838 systemd-logind[789]: New session 7 of user zuul.
Nov 25 18:11:05 np0005535838 systemd[1]: Started Session 7 of User zuul.
Nov 25 18:11:06 np0005535838 python3[30122]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:11:09 np0005535838 python3[30238]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:11:09 np0005535838 python3[30311]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:11:10 np0005535838 python3[30337]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:11:10 np0005535838 python3[30410]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:11:10 np0005535838 python3[30436]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:11:11 np0005535838 python3[30509]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:11:11 np0005535838 python3[30535]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:11:11 np0005535838 python3[30608]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:11:12 np0005535838 python3[30634]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:11:12 np0005535838 python3[30707]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:11:12 np0005535838 python3[30733]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:11:13 np0005535838 python3[30806]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:11:13 np0005535838 python3[30832]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:11:13 np0005535838 python3[30905]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:11:25 np0005535838 python3[30963]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:16:25 np0005535838 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 18:16:25 np0005535838 systemd[1]: session-7.scope: Consumed 5.896s CPU time.
Nov 25 18:16:25 np0005535838 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Nov 25 18:16:25 np0005535838 systemd-logind[789]: Removed session 7.
Nov 25 18:22:18 np0005535838 systemd-logind[789]: New session 8 of user zuul.
Nov 25 18:22:19 np0005535838 systemd[1]: Started Session 8 of User zuul.
Nov 25 18:22:20 np0005535838 python3.9[31642]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:22:21 np0005535838 python3.9[31823]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:22:29 np0005535838 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Nov 25 18:22:29 np0005535838 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 18:22:29 np0005535838 systemd[1]: session-8.scope: Consumed 8.102s CPU time.
Nov 25 18:22:29 np0005535838 systemd-logind[789]: Removed session 8.
Nov 25 18:22:44 np0005535838 systemd-logind[789]: New session 9 of user zuul.
Nov 25 18:22:44 np0005535838 systemd[1]: Started Session 9 of User zuul.
Nov 25 18:22:45 np0005535838 python3.9[32037]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 18:22:46 np0005535838 python3.9[32211]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:22:47 np0005535838 python3.9[32363]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:22:48 np0005535838 python3.9[32516]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:22:50 np0005535838 python3.9[32668]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:22:50 np0005535838 python3.9[32822]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:22:51 np0005535838 python3.9[32945]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764112970.3555105-73-86464027415954/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:22:52 np0005535838 python3.9[33097]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:22:53 np0005535838 python3.9[33253]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:22:54 np0005535838 python3.9[33405]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:22:55 np0005535838 python3.9[33555]: ansible-ansible.builtin.service_facts Invoked
Nov 25 18:23:01 np0005535838 python3.9[33808]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
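`/proc/cmdline` is read-only, so the `lineinfile` task above works in effect as an assertion rather than an edit: it completes without changes only when `cloud-init=disabled` is already a token on the kernel command line. The equivalent check, as a sketch (the helper name is illustrative):

```python
def cmdline_has_token(token: str, cmdline: str) -> bool:
    """True if the exact token appears on the kernel command line."""
    return token in cmdline.split()

# On a live system the cmdline comes from /proc/cmdline, e.g.:
# "ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 cloud-init=disabled"
with open("/proc/cmdline") as f:
    print(cmdline_has_token("cloud-init=disabled", f.read()))
```

Splitting on whitespace avoids false positives from substrings (e.g. a hypothetical `xcloud-init=disabledy` argument would not match).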
Nov 25 18:23:02 np0005535838 python3.9[33958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:23:03 np0005535838 python3.9[34112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:23:04 np0005535838 python3.9[34270]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:23:05 np0005535838 python3.9[34354]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
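Stripped of its defaulted parameters, the `ansible.legacy.dnf` invocation above corresponds to a playbook task along these lines (a reconstruction from the logged arguments, not the actual playbook source; the task name is invented):

```yaml
- name: Install EDPM baseline packages
  ansible.builtin.dnf:
    state: present
    name:
      - driverctl
      - lvm2
      - crudini
      - jq
      - nftables
      - NetworkManager
      - openstack-selinux
      - python3-libselinux
      - python3-pyyaml
      - rsync
      - tmpwatch
      - sysstat
      - iproute-tc
      - ksmtuned
      - systemd-container
      - crypto-policies-scripts
      - grubby
      - sos
```

The burst of `systemd[1]: Reloading.` and `dnf makecache` lines that follows is the side effect of this transaction installing units and triggering daemon reloads.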
Nov 25 18:24:02 np0005535838 systemd[1]: Reloading.
Nov 25 18:24:02 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:24:03 np0005535838 systemd[1]: Starting dnf makecache...
Nov 25 18:24:03 np0005535838 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 18:24:03 np0005535838 dnf[34558]: Failed determining last makecache time.
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-barbican-42b4c41831408a8e323 160 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 193 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-cinder-1c00d6490d88e436f26ef 195 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-python-stevedore-c4acc5639fd2329372142 193 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-python-observabilityclient-2f31846d73c 195 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-os-net-config-bbae2ed8a159b0435a473f38 190 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 201 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 systemd[1]: Reloading.
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-python-designate-tests-tempest-347fdbc 191 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-glance-1fd12c29b339f30fe823e 190 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 200 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-manila-3c01b7181572c95dac462 208 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-python-whitebox-neutron-tests-tempest- 191 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-octavia-ba397f07a7331190208c 196 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-watcher-c014f81a8647287f6dcc 181 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-python-tcib-1124124ec06aadbac34f0d340b 199 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 155 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-swift-dc98a8463506ac520c469a 186 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-python-tempestconf-8515371b7cceebd4282 165 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 dnf[34558]: delorean-openstack-heat-ui-013accbfd179753bc3f0 190 kB/s | 3.0 kB     00:00
Nov 25 18:24:03 np0005535838 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 18:24:03 np0005535838 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 18:24:03 np0005535838 dnf[34558]: CentOS Stream 9 - BaseOS                         44 kB/s | 6.7 kB     00:00
Nov 25 18:24:03 np0005535838 systemd[1]: Reloading.
Nov 25 18:24:03 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:24:04 np0005535838 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 18:24:04 np0005535838 dnf[34558]: CentOS Stream 9 - AppStream                      25 kB/s | 6.8 kB     00:00
Nov 25 18:24:04 np0005535838 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 18:24:04 np0005535838 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 18:24:04 np0005535838 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 18:24:04 np0005535838 dnf[34558]: CentOS Stream 9 - CRB                            60 kB/s | 6.5 kB     00:00
Nov 25 18:24:04 np0005535838 dnf[34558]: CentOS Stream 9 - Extras packages                71 kB/s | 8.3 kB     00:00
Nov 25 18:24:04 np0005535838 dnf[34558]: dlrn-antelope-testing                           108 kB/s | 3.0 kB     00:00
Nov 25 18:24:04 np0005535838 dnf[34558]: dlrn-antelope-build-deps                        113 kB/s | 3.0 kB     00:00
Nov 25 18:24:06 np0005535838 dnf[34558]: centos9-rabbitmq                                1.9 kB/s | 3.0 kB     00:01
Nov 25 18:24:06 np0005535838 dnf[34558]: centos9-storage                                  81 kB/s | 3.0 kB     00:00
Nov 25 18:24:06 np0005535838 dnf[34558]: centos9-opstools                                 75 kB/s | 3.0 kB     00:00
Nov 25 18:24:06 np0005535838 dnf[34558]: NFV SIG OpenvSwitch                              77 kB/s | 3.0 kB     00:00
Nov 25 18:24:06 np0005535838 dnf[34558]: repo-setup-centos-appstream                     126 kB/s | 4.4 kB     00:00
Nov 25 18:24:06 np0005535838 dnf[34558]: repo-setup-centos-baseos                        138 kB/s | 3.9 kB     00:00
Nov 25 18:24:06 np0005535838 dnf[34558]: repo-setup-centos-highavailability               12 kB/s | 3.9 kB     00:00
Nov 25 18:24:06 np0005535838 dnf[34558]: repo-setup-centos-powertools                    141 kB/s | 4.3 kB     00:00
Nov 25 18:24:07 np0005535838 dnf[34558]: Extra Packages for Enterprise Linux 9 - x86_64   91 kB/s |  35 kB     00:00
Nov 25 18:24:07 np0005535838 dnf[34558]: Metadata cache created.
Nov 25 18:24:07 np0005535838 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 18:24:07 np0005535838 systemd[1]: Finished dnf makecache.
Nov 25 18:24:07 np0005535838 systemd[1]: dnf-makecache.service: Consumed 1.885s CPU time.
Nov 25 18:25:06 np0005535838 kernel: SELinux:  Converting 2718 SID table entries...
Nov 25 18:25:06 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:25:06 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:25:06 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:25:06 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:25:06 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:25:06 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:25:06 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:25:06 np0005535838 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 18:25:06 np0005535838 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 18:25:06 np0005535838 systemd[1]: Starting man-db-cache-update.service...
Nov 25 18:25:06 np0005535838 systemd[1]: Reloading.
Nov 25 18:25:06 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:25:07 np0005535838 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 18:25:08 np0005535838 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 18:25:08 np0005535838 systemd[1]: Finished man-db-cache-update.service.
Nov 25 18:25:08 np0005535838 systemd[1]: man-db-cache-update.service: Consumed 1.345s CPU time.
Nov 25 18:25:08 np0005535838 systemd[1]: run-r426503ff246b49428b1c9976a5ca9905.service: Deactivated successfully.
Nov 25 18:25:08 np0005535838 python3.9[35909]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:25:10 np0005535838 python3.9[36192]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 18:25:11 np0005535838 python3.9[36344]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 18:25:13 np0005535838 python3.9[36497]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:25:14 np0005535838 python3.9[36649]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
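With `state=present`, `ansible.posix.mount` only records the entry in `/etc/fstab`; it does not activate the swap (that happens later via `mkswap`/`swapon` at 18:25:53). The logged arguments (`src=/swap`, `name=none`, `fstype=swap`, `opts=sw`, `dump=0`, `passno=0`) produce an fstab line of the form:

```
/swap none swap sw 0 0
```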
Nov 25 18:25:15 np0005535838 python3.9[36801]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:25:16 np0005535838 python3.9[36953]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:25:17 np0005535838 python3.9[37076]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113116.212891-236-260308907406611/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:25:18 np0005535838 python3.9[37228]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:25:19 np0005535838 python3.9[37380]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:25:20 np0005535838 python3.9[37533]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:25:21 np0005535838 python3.9[37685]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 18:25:21 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:25:23 np0005535838 irqbalance[783]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 25 18:25:23 np0005535838 irqbalance[783]: IRQ 26 affinity is now unmanaged
Nov 25 18:25:25 np0005535838 python3.9[37839]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 18:25:26 np0005535838 python3.9[37997]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 18:25:27 np0005535838 python3.9[38157]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 18:25:28 np0005535838 python3.9[38312]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 18:25:29 np0005535838 python3.9[38470]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 18:25:30 np0005535838 python3.9[38622]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:25:32 np0005535838 python3.9[38775]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:25:33 np0005535838 python3.9[38927]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:25:33 np0005535838 python3.9[39050]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113132.6256735-355-130533914106850/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:25:34 np0005535838 python3.9[39202]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:25:35 np0005535838 systemd[1]: Starting Load Kernel Modules...
Nov 25 18:25:35 np0005535838 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 18:25:35 np0005535838 kernel: Bridge firewalling registered
Nov 25 18:25:35 np0005535838 systemd-modules-load[39206]: Inserted module 'br_netfilter'
Nov 25 18:25:35 np0005535838 systemd[1]: Finished Load Kernel Modules.
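Restarting `systemd-modules-load.service` makes it re-read all `modules-load.d` drop-ins, including the freshly copied `/etc/modules-load.d/99-edpm.conf`. Judging from the `Inserted module 'br_netfilter'` line, that file contains at least the following (the full contents are not shown in the log):

```
# /etc/modules-load.d/99-edpm.conf (partial, inferred from the log)
br_netfilter
```

The two kernel lines above (`bridge: filtering via arp/ip/ip6tables...` and `Bridge firewalling registered`) are the expected side effect of loading `br_netfilter`.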
Nov 25 18:25:35 np0005535838 python3.9[39362]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:25:36 np0005535838 python3.9[39485]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113135.3603024-378-224572589975245/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:25:37 np0005535838 python3.9[39637]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:25:41 np0005535838 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 18:25:41 np0005535838 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 18:25:41 np0005535838 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 18:25:41 np0005535838 systemd[1]: Starting man-db-cache-update.service...
Nov 25 18:25:41 np0005535838 systemd[1]: Reloading.
Nov 25 18:25:41 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:25:41 np0005535838 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 18:25:43 np0005535838 python3.9[40947]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:25:44 np0005535838 python3.9[41879]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 18:25:44 np0005535838 python3.9[42696]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:25:45 np0005535838 python3.9[43553]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:25:45 np0005535838 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 18:25:45 np0005535838 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 18:25:45 np0005535838 systemd[1]: Finished man-db-cache-update.service.
Nov 25 18:25:45 np0005535838 systemd[1]: man-db-cache-update.service: Consumed 5.555s CPU time.
Nov 25 18:25:45 np0005535838 systemd[1]: run-re9ed9568aac34088a97998bc402a2d45.service: Deactivated successfully.
Nov 25 18:25:46 np0005535838 systemd[1]: Starting Authorization Manager...
Nov 25 18:25:46 np0005535838 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 18:25:46 np0005535838 polkitd[44014]: Started polkitd version 0.117
Nov 25 18:25:46 np0005535838 systemd[1]: Started Authorization Manager.
Nov 25 18:25:47 np0005535838 python3.9[44184]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:25:47 np0005535838 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 18:25:47 np0005535838 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 18:25:47 np0005535838 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 18:25:47 np0005535838 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 18:25:47 np0005535838 systemd[1]: Started Dynamic System Tuning Daemon.
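The stat/slurp pair at 18:25:43 reads `/etc/tuned/active_profile`, which holds just the profile name, to decide whether `tuned-adm profile throughput-performance` needs to run. A minimal idempotency check in the same spirit (the function name and the scratch path are illustrative):

```python
from pathlib import Path

def needs_profile_switch(active_profile_file: str, wanted: str) -> bool:
    """True if tuned-adm should be invoked, i.e. the recorded active
    profile is absent or differs from the wanted one."""
    p = Path(active_profile_file)
    if not p.exists():
        return True
    return p.read_text().strip() != wanted

# Scratch file standing in for /etc/tuned/active_profile
demo = Path("/tmp/demo_active_profile")
demo.write_text("throughput-performance\n")
print(needs_profile_switch(str(demo), "throughput-performance"))  # False
```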
Nov 25 18:25:48 np0005535838 python3.9[44346]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 18:25:51 np0005535838 python3.9[44498]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:25:51 np0005535838 systemd[1]: Reloading.
Nov 25 18:25:51 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:25:52 np0005535838 python3.9[44690]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:25:52 np0005535838 systemd[1]: Reloading.
Nov 25 18:25:52 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:25:53 np0005535838 python3.9[44878]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:25:54 np0005535838 python3.9[45031]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:25:54 np0005535838 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
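Taken together, the swap-related tasks (the `dd` at 18:25:11, the `0600` file mode, the fstab entry, and the `mkswap`/`swapon` pair above) amount to the classic swap-file recipe. A minimal sketch using a throwaway path (`/tmp/demo-swap` and the 4 MiB size are illustrative; the log uses `/swap` and 1024 MiB, and `swapon` needs root, so it is left commented):

```shell
#!/bin/sh
set -eu
SWAPFILE=/tmp/demo-swap
# Allocate the backing file (log: dd if=/dev/zero of=/swap count=1024 bs=1M)
dd if=/dev/zero of="$SWAPFILE" bs=1M count=4 2>/dev/null
# Swap files must not be readable by other users
chmod 600 "$SWAPFILE"
# Write the swap signature, if util-linux is available on this host
command -v mkswap >/dev/null && mkswap "$SWAPFILE"
# swapon "$SWAPFILE"   # root only; the kernel then logs "Adding ... swap on ..."
```

The kernel line above, `Adding 1048572k swap on /swap`, is the confirmation of the final `swapon` step (1024 MiB minus the one-page swap header).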
Nov 25 18:25:55 np0005535838 python3.9[45184]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:25:57 np0005535838 python3.9[45346]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:25:58 np0005535838 python3.9[45499]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:25:58 np0005535838 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 18:25:58 np0005535838 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 18:25:58 np0005535838 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 18:25:58 np0005535838 systemd[1]: Starting Apply Kernel Variables...
Nov 25 18:25:58 np0005535838 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 18:25:58 np0005535838 systemd[1]: Finished Apply Kernel Variables.
Nov 25 18:25:58 np0005535838 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 18:25:58 np0005535838 systemd[1]: session-9.scope: Consumed 2min 19.879s CPU time.
Nov 25 18:25:58 np0005535838 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Nov 25 18:25:58 np0005535838 systemd-logind[789]: Removed session 9.
Nov 25 18:26:04 np0005535838 systemd-logind[789]: New session 10 of user zuul.
Nov 25 18:26:04 np0005535838 systemd[1]: Started Session 10 of User zuul.
Nov 25 18:26:05 np0005535838 python3.9[45685]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:26:07 np0005535838 python3.9[45841]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 18:26:07 np0005535838 python3.9[45994]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 18:26:08 np0005535838 python3.9[46152]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 18:26:09 np0005535838 python3.9[46312]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:26:10 np0005535838 python3.9[46396]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 18:26:14 np0005535838 python3.9[46561]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:26:24 np0005535838 kernel: SELinux:  Converting 2730 SID table entries...
Nov 25 18:26:24 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:26:24 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:26:24 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:26:24 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:26:24 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:26:24 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:26:24 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:26:25 np0005535838 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 18:26:25 np0005535838 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 18:26:26 np0005535838 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 18:26:26 np0005535838 systemd[1]: Starting man-db-cache-update.service...
Nov 25 18:26:26 np0005535838 systemd[1]: Reloading.
Nov 25 18:26:26 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:26:26 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:26:26 np0005535838 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 18:26:27 np0005535838 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 18:26:27 np0005535838 systemd[1]: Finished man-db-cache-update.service.
Nov 25 18:26:27 np0005535838 systemd[1]: run-r3acf9f046dee491789275c046c8a6594.service: Deactivated successfully.
Nov 25 18:26:28 np0005535838 python3.9[47662]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 18:26:28 np0005535838 systemd[1]: Reloading.
Nov 25 18:26:28 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:26:28 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:26:28 np0005535838 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 18:26:28 np0005535838 chown[47704]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 18:26:29 np0005535838 ovs-ctl[47709]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 18:26:29 np0005535838 ovs-ctl[47709]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 18:26:29 np0005535838 ovs-ctl[47709]: Starting ovsdb-server [  OK  ]
Nov 25 18:26:29 np0005535838 ovs-vsctl[47758]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 18:26:29 np0005535838 ovs-vsctl[47778]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"2ba84045-48af-49e3-86f7-35b32300977f\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 18:26:29 np0005535838 ovs-ctl[47709]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 18:26:29 np0005535838 ovs-ctl[47709]: Enabling remote OVSDB managers [  OK  ]
Nov 25 18:26:29 np0005535838 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 18:26:29 np0005535838 ovs-vsctl[47784]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 18:26:29 np0005535838 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 18:26:29 np0005535838 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 18:26:29 np0005535838 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 18:26:29 np0005535838 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 18:26:29 np0005535838 ovs-ctl[47829]: Inserting openvswitch module [  OK  ]
Nov 25 18:26:29 np0005535838 ovs-ctl[47798]: Starting ovs-vswitchd [  OK  ]
Nov 25 18:26:29 np0005535838 ovs-ctl[47798]: Enabling remote OVSDB managers [  OK  ]
Nov 25 18:26:29 np0005535838 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 18:26:29 np0005535838 ovs-vsctl[47851]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 18:26:29 np0005535838 systemd[1]: Starting Open vSwitch...
Nov 25 18:26:29 np0005535838 systemd[1]: Finished Open vSwitch.
Nov 25 18:26:30 np0005535838 python3.9[48002]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:26:32 np0005535838 python3.9[48154]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 18:26:33 np0005535838 kernel: SELinux:  Converting 2744 SID table entries...
Nov 25 18:26:33 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:26:33 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:26:33 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:26:33 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:26:33 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:26:33 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:26:33 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:26:34 np0005535838 python3.9[48309]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:26:35 np0005535838 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 18:26:35 np0005535838 python3.9[48467]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:26:37 np0005535838 python3.9[48620]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:26:39 np0005535838 python3.9[48907]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 18:26:40 np0005535838 python3.9[49057]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:26:41 np0005535838 python3.9[49211]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:26:42 np0005535838 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 18:26:42 np0005535838 systemd[1]: Starting man-db-cache-update.service...
Nov 25 18:26:43 np0005535838 systemd[1]: Reloading.
Nov 25 18:26:43 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:26:43 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:26:43 np0005535838 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 18:26:43 np0005535838 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 18:26:43 np0005535838 systemd[1]: Finished man-db-cache-update.service.
Nov 25 18:26:43 np0005535838 systemd[1]: run-re9f5fa994f44462194f74842383ab087.service: Deactivated successfully.
Nov 25 18:26:44 np0005535838 python3.9[49527]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:26:44 np0005535838 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 18:26:44 np0005535838 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 18:26:44 np0005535838 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 18:26:44 np0005535838 NetworkManager[7181]: <info>  [1764113204.5166] caught SIGTERM, shutting down normally.
Nov 25 18:26:44 np0005535838 systemd[1]: Stopping Network Manager...
Nov 25 18:26:44 np0005535838 NetworkManager[7181]: <info>  [1764113204.5181] dhcp4 (eth0): canceled DHCP transaction
Nov 25 18:26:44 np0005535838 NetworkManager[7181]: <info>  [1764113204.5181] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 18:26:44 np0005535838 NetworkManager[7181]: <info>  [1764113204.5181] dhcp4 (eth0): state changed no lease
Nov 25 18:26:44 np0005535838 NetworkManager[7181]: <info>  [1764113204.5184] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 18:26:44 np0005535838 NetworkManager[7181]: <info>  [1764113204.5253] exiting (success)
Nov 25 18:26:44 np0005535838 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 18:26:44 np0005535838 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 18:26:44 np0005535838 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 18:26:44 np0005535838 systemd[1]: Stopped Network Manager.
Nov 25 18:26:44 np0005535838 systemd[1]: NetworkManager.service: Consumed 13.174s CPU time, 4.1M memory peak, read 0B from disk, written 30.5K to disk.
Nov 25 18:26:44 np0005535838 systemd[1]: Starting Network Manager...
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.6112] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3edafa6c-db49-405c-9758-42faad226154)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.6113] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.6186] manager[0x5631585c4090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 18:26:44 np0005535838 systemd[1]: Starting Hostname Service...
Nov 25 18:26:44 np0005535838 systemd[1]: Started Hostname Service.
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.6955] hostname: hostname: using hostnamed
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.6958] hostname: static hostname changed from (none) to "compute-0"
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.6965] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.6973] manager[0x5631585c4090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.6973] manager[0x5631585c4090]: rfkill: WWAN hardware radio set enabled
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7014] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7030] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7031] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7032] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7033] manager: Networking is enabled by state file
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7037] settings: Loaded settings plugin: keyfile (internal)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7044] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7100] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7119] dhcp: init: Using DHCP client 'internal'
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7124] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7135] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7148] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7165] device (lo): Activation: starting connection 'lo' (8a2e98f0-f5c9-4e09-92f1-2bf0997fed4f)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7179] device (eth0): carrier: link connected
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7189] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7199] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7200] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7215] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7229] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7242] device (eth1): carrier: link connected
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7250] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7262] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea) (indicated)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7263] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7275] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7292] device (eth1): Activation: starting connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7301] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 18:26:44 np0005535838 systemd[1]: Started Network Manager.
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7329] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7347] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7351] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7355] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7359] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7364] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7369] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7374] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7390] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7395] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7409] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7430] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7447] dhcp4 (eth0): state changed new lease, address=38.102.83.77
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7460] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7557] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7846] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7847] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7853] device (lo): Activation: successful, device activated.
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7860] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7862] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7864] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7866] device (eth1): Activation: successful, device activated.
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7878] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7880] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7883] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7885] device (eth0): Activation: successful, device activated.
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7890] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 18:26:44 np0005535838 NetworkManager[49538]: <info>  [1764113204.7891] manager: startup complete
Nov 25 18:26:44 np0005535838 systemd[1]: Starting Network Manager Wait Online...
Nov 25 18:26:44 np0005535838 systemd[1]: Finished Network Manager Wait Online.
Nov 25 18:26:45 np0005535838 python3.9[49753]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:26:50 np0005535838 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 18:26:50 np0005535838 systemd[1]: Starting man-db-cache-update.service...
Nov 25 18:26:50 np0005535838 systemd[1]: Reloading.
Nov 25 18:26:50 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:26:50 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:26:50 np0005535838 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 18:26:51 np0005535838 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 18:26:51 np0005535838 systemd[1]: Finished man-db-cache-update.service.
Nov 25 18:26:51 np0005535838 systemd[1]: run-rc7d41e3d4ba6474989c0b7b62881b9f7.service: Deactivated successfully.
Nov 25 18:26:52 np0005535838 python3.9[50212]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:26:53 np0005535838 python3.9[50364]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:26:54 np0005535838 python3.9[50518]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:26:54 np0005535838 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 18:26:54 np0005535838 python3.9[50672]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:26:55 np0005535838 python3.9[50824]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:26:56 np0005535838 python3.9[50976]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:26:57 np0005535838 python3.9[51128]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:26:58 np0005535838 python3.9[51251]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113216.7272477-229-211187757122222/.source _original_basename=.5htp7qku follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:26:58 np0005535838 python3.9[51403]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:26:59 np0005535838 python3.9[51555]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 18:27:00 np0005535838 python3.9[51707]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:27:03 np0005535838 python3.9[52134]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 18:27:04 np0005535838 ansible-async_wrapper.py[52309]: Invoked with j269707048957 300 /home/zuul/.ansible/tmp/ansible-tmp-1764113223.5894563-295-33890118623343/AnsiballZ_edpm_os_net_config.py _
Nov 25 18:27:04 np0005535838 ansible-async_wrapper.py[52312]: Starting module and watcher
Nov 25 18:27:04 np0005535838 ansible-async_wrapper.py[52312]: Start watching 52313 (300)
Nov 25 18:27:04 np0005535838 ansible-async_wrapper.py[52313]: Start module (52313)
Nov 25 18:27:04 np0005535838 ansible-async_wrapper.py[52309]: Return async_wrapper task started.
Nov 25 18:27:04 np0005535838 python3.9[52314]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 25 18:27:05 np0005535838 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 18:27:05 np0005535838 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 18:27:05 np0005535838 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 18:27:05 np0005535838 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 18:27:05 np0005535838 kernel: cfg80211: failed to load regulatory.db
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.0918] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.0942] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1810] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1813] audit: op="connection-add" uuid="9c708b3a-69d0-43ee-b403-d7301cd129e2" name="br-ex-br" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1836] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1838] audit: op="connection-add" uuid="50921d83-0472-4957-b860-618d86855943" name="br-ex-port" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1859] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1861] audit: op="connection-add" uuid="a27ac2e2-e8a9-45af-94be-b51b58d97c3f" name="eth1-port" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1881] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1884] audit: op="connection-add" uuid="d33b806a-8180-49b2-85e2-c860ee2e7ae4" name="vlan20-port" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1905] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1908] audit: op="connection-add" uuid="1680c4e9-1e1a-4b7b-8844-e59f320c8c3e" name="vlan21-port" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1928] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1931] audit: op="connection-add" uuid="c1877d06-7c4e-4fcf-9774-985a89a4720e" name="vlan22-port" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1950] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1953] audit: op="connection-add" uuid="4c7ef48f-7cfd-49bd-b160-32bf38bb9924" name="vlan23-port" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.1986] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.timestamp,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2015] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2018] audit: op="connection-add" uuid="a3c3a56d-712c-4672-be0f-5a239aa74094" name="br-ex-if" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2081] audit: op="connection-update" uuid="c033a316-88a7-53bd-b45b-dbc337c105ea" name="ci-private-network" args="ipv4.dns,ipv4.never-default,ipv4.addresses,ipv4.routes,ipv4.routing-rules,ipv4.method,connection.slave-type,connection.controller,connection.port-type,connection.master,connection.timestamp,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.routes,ipv6.routing-rules,ipv6.method,ovs-interface.type,ovs-external-ids.data" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2113] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2116] audit: op="connection-add" uuid="8ae29937-410c-4a40-9ccd-249a4bb9fd1b" name="vlan20-if" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2144] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2147] audit: op="connection-add" uuid="fa6b6eb4-bb5d-4723-8e70-6eedb5b870f6" name="vlan21-if" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2175] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2177] audit: op="connection-add" uuid="95a7d88b-877c-4534-856e-447059dcc871" name="vlan22-if" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2205] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2208] audit: op="connection-add" uuid="814c5f06-5e24-4818-8c00-db9bc8b02641" name="vlan23-if" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2227] audit: op="connection-delete" uuid="395ca42c-36a5-36fb-90b5-378a60416156" name="Wired connection 1" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2246] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2262] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2268] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (9c708b3a-69d0-43ee-b403-d7301cd129e2)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2269] audit: op="connection-activate" uuid="9c708b3a-69d0-43ee-b403-d7301cd129e2" name="br-ex-br" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2272] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2283] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2289] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (50921d83-0472-4957-b860-618d86855943)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2293] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2302] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2309] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a27ac2e2-e8a9-45af-94be-b51b58d97c3f)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2312] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2324] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2331] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (d33b806a-8180-49b2-85e2-c860ee2e7ae4)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2334] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2344] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2351] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (1680c4e9-1e1a-4b7b-8844-e59f320c8c3e)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2354] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2364] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2371] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c1877d06-7c4e-4fcf-9774-985a89a4720e)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2374] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2385] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2390] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4c7ef48f-7cfd-49bd-b160-32bf38bb9924)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2391] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2394] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2396] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2402] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2408] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2412] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a3c3a56d-712c-4672-be0f-5a239aa74094)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2413] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2416] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2418] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2419] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2421] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2434] device (eth1): disconnecting for new activation request.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2435] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2446] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2450] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2453] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2458] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2468] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2476] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (8ae29937-410c-4a40-9ccd-249a4bb9fd1b)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2477] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2482] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2486] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2489] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2494] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2503] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2510] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (fa6b6eb4-bb5d-4723-8e70-6eedb5b870f6)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2512] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2518] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2523] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2525] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2530] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2539] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2548] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (95a7d88b-877c-4534-856e-447059dcc871)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2549] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2553] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2555] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2557] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2560] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2566] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2570] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (814c5f06-5e24-4818-8c00-db9bc8b02641)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2572] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2575] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2578] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2579] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2581] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2597] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2602] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2607] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2609] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2627] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2632] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2639] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 kernel: ovs-system: entered promiscuous mode
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2643] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2645] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2654] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2660] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2666] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2670] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2677] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2684] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2689] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 kernel: Timeout policy base is empty
Nov 25 18:27:07 np0005535838 systemd-udevd[52321]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2691] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2699] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2706] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2712] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2714] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2720] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2727] dhcp4 (eth0): canceled DHCP transaction
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2727] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2727] dhcp4 (eth0): state changed no lease
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2729] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2742] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2760] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52317 uid=0 result="fail" reason="Device is not activated"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2768] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2790] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2794] dhcp4 (eth0): state changed new lease, address=38.102.83.77
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2852] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2944] device (eth1): Activation: starting connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2951] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2961] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2974] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2983] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.2990] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3002] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3010] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3018] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3022] device (eth1): released from controller device eth1
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3032] device (eth1): disconnecting for new activation request.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3033] audit: op="connection-activate" uuid="c033a316-88a7-53bd-b45b-dbc337c105ea" name="ci-private-network" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3034] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3036] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3038] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3041] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3043] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3045] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3049] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3056] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3063] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3070] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3075] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3081] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3086] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3093] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3098] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3105] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3110] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3116] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3122] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3133] device (eth1): Activation: starting connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3154] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3159] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3165] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3167] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 kernel: br-ex: entered promiscuous mode
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3211] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3216] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3259] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3262] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3271] device (eth1): Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 kernel: vlan22: entered promiscuous mode
Nov 25 18:27:07 np0005535838 systemd-udevd[52322]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3434] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3449] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3516] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3519] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3528] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 kernel: vlan21: entered promiscuous mode
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3589] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 18:27:07 np0005535838 systemd-udevd[52421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3618] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3674] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3677] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3687] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 kernel: vlan20: entered promiscuous mode
Nov 25 18:27:07 np0005535838 kernel: vlan23: entered promiscuous mode
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3812] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3829] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3900] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3905] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3917] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3936] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.3957] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.4013] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.4015] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.4023] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.4036] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.4064] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.4116] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.4121] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 18:27:07 np0005535838 NetworkManager[49538]: <info>  [1764113227.4133] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 18:27:08 np0005535838 python3.9[52680]: ansible-ansible.legacy.async_status Invoked with jid=j269707048957.52309 mode=status _async_dir=/root/.ansible_async
Nov 25 18:27:08 np0005535838 NetworkManager[49538]: <info>  [1764113228.5580] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 18:27:08 np0005535838 NetworkManager[49538]: <info>  [1764113228.8320] checkpoint[0x56315859a950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 18:27:08 np0005535838 NetworkManager[49538]: <info>  [1764113228.8327] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 18:27:09 np0005535838 NetworkManager[49538]: <info>  [1764113229.3659] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52317 uid=0 result="success"
Nov 25 18:27:09 np0005535838 NetworkManager[49538]: <info>  [1764113229.3675] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52317 uid=0 result="success"
Nov 25 18:27:09 np0005535838 ansible-async_wrapper.py[52312]: 52313 still running (300)
Nov 25 18:27:09 np0005535838 NetworkManager[49538]: <info>  [1764113229.6038] audit: op="networking-control" arg="global-dns-configuration" pid=52317 uid=0 result="success"
Nov 25 18:27:09 np0005535838 NetworkManager[49538]: <info>  [1764113229.6064] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 25 18:27:09 np0005535838 NetworkManager[49538]: <info>  [1764113229.6086] audit: op="networking-control" arg="global-dns-configuration" pid=52317 uid=0 result="success"
Nov 25 18:27:09 np0005535838 NetworkManager[49538]: <info>  [1764113229.6538] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52317 uid=0 result="success"
Nov 25 18:27:09 np0005535838 NetworkManager[49538]: <info>  [1764113229.8489] checkpoint[0x56315859aa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 18:27:09 np0005535838 NetworkManager[49538]: <info>  [1764113229.8496] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52317 uid=0 result="success"
Nov 25 18:27:09 np0005535838 ansible-async_wrapper.py[52313]: Module complete (52313)
Nov 25 18:27:11 np0005535838 python3.9[52786]: ansible-ansible.legacy.async_status Invoked with jid=j269707048957.52309 mode=status _async_dir=/root/.ansible_async
Nov 25 18:27:12 np0005535838 python3.9[52886]: ansible-ansible.legacy.async_status Invoked with jid=j269707048957.52309 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 18:27:13 np0005535838 python3.9[53038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:27:14 np0005535838 python3.9[53161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113232.8052025-322-693976135028/.source.returncode _original_basename=.9i55egcd follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:27:14 np0005535838 ansible-async_wrapper.py[52312]: Done in kid B.
Nov 25 18:27:14 np0005535838 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 18:27:14 np0005535838 python3.9[53316]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:27:15 np0005535838 python3.9[53440]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113234.3899596-338-276113039052793/.source.cfg _original_basename=.9frrghco follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:27:16 np0005535838 python3.9[53592]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:27:16 np0005535838 systemd[1]: Reloading Network Manager...
Nov 25 18:27:16 np0005535838 NetworkManager[49538]: <info>  [1764113236.6988] audit: op="reload" arg="0" pid=53596 uid=0 result="success"
Nov 25 18:27:16 np0005535838 NetworkManager[49538]: <info>  [1764113236.6999] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 18:27:16 np0005535838 systemd[1]: Reloaded Network Manager.
Nov 25 18:27:17 np0005535838 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Nov 25 18:27:17 np0005535838 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 18:27:17 np0005535838 systemd[1]: session-10.scope: Consumed 53.532s CPU time.
Nov 25 18:27:17 np0005535838 systemd-logind[789]: Removed session 10.
Nov 25 18:27:22 np0005535838 systemd-logind[789]: New session 11 of user zuul.
Nov 25 18:27:22 np0005535838 systemd[1]: Started Session 11 of User zuul.
Nov 25 18:27:23 np0005535838 python3.9[53780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:27:24 np0005535838 python3.9[53934]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:27:26 np0005535838 python3.9[54128]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:27:26 np0005535838 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 18:27:26 np0005535838 systemd[1]: session-11.scope: Consumed 2.816s CPU time.
Nov 25 18:27:26 np0005535838 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Nov 25 18:27:26 np0005535838 systemd-logind[789]: Removed session 11.
Nov 25 18:27:26 np0005535838 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 18:27:32 np0005535838 systemd-logind[789]: New session 12 of user zuul.
Nov 25 18:27:32 np0005535838 systemd[1]: Started Session 12 of User zuul.
Nov 25 18:27:33 np0005535838 python3.9[54310]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:27:34 np0005535838 python3.9[54465]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:27:35 np0005535838 python3.9[54621]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:27:36 np0005535838 python3.9[54705]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:27:38 np0005535838 python3.9[54861]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:27:40 np0005535838 python3.9[55056]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:27:40 np0005535838 python3.9[55208]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:27:41 np0005535838 systemd[1]: var-lib-containers-storage-overlay-compat2330883822-merged.mount: Deactivated successfully.
Nov 25 18:27:41 np0005535838 podman[55209]: 2025-11-25 23:27:41.032096284 +0000 UTC m=+0.076266489 system refresh
Nov 25 18:27:42 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:27:42 np0005535838 python3.9[55371]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:27:43 np0005535838 python3.9[55494]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113261.2892268-79-61236339164330/.source.json follow=False _original_basename=podman_network_config.j2 checksum=f280372936b849c8e6221e8789aa3704a9a98b1a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:27:43 np0005535838 python3.9[55646]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:27:44 np0005535838 python3.9[55769]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113263.2862916-94-10864311615966/.source.conf follow=False _original_basename=registries.conf.j2 checksum=485c636425e28137b9c2e788e9d5fc748a88106d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:27:45 np0005535838 python3.9[55921]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:27:46 np0005535838 python3.9[56073]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:27:47 np0005535838 python3.9[56225]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:27:47 np0005535838 python3.9[56377]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:27:48 np0005535838 python3.9[56529]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:27:50 np0005535838 python3.9[56682]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:27:51 np0005535838 python3.9[56836]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:27:52 np0005535838 python3.9[56988]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:27:53 np0005535838 python3.9[57140]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:27:54 np0005535838 python3.9[57293]: ansible-service_facts Invoked
Nov 25 18:27:54 np0005535838 network[57310]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 18:27:54 np0005535838 network[57311]: 'network-scripts' will be removed from distribution in near future.
Nov 25 18:27:54 np0005535838 network[57312]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 18:28:00 np0005535838 python3.9[57764]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:28:02 np0005535838 python3.9[57917]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 18:28:04 np0005535838 python3.9[58069]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:04 np0005535838 python3.9[58194]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113283.4254043-238-80182029626208/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:05 np0005535838 python3.9[58348]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:06 np0005535838 python3.9[58473]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113285.037947-253-33826834532301/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:07 np0005535838 python3.9[58627]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:08 np0005535838 python3.9[58781]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:28:10 np0005535838 python3.9[58865]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:28:11 np0005535838 python3.9[59019]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:28:12 np0005535838 python3.9[59103]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:28:12 np0005535838 chronyd[791]: chronyd exiting
Nov 25 18:28:12 np0005535838 systemd[1]: Stopping NTP client/server...
Nov 25 18:28:12 np0005535838 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 18:28:12 np0005535838 systemd[1]: Stopped NTP client/server.
Nov 25 18:28:12 np0005535838 systemd[1]: Starting NTP client/server...
Nov 25 18:28:12 np0005535838 chronyd[59112]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 18:28:12 np0005535838 chronyd[59112]: Frequency -26.084 +/- 0.082 ppm read from /var/lib/chrony/drift
Nov 25 18:28:12 np0005535838 chronyd[59112]: Loaded seccomp filter (level 2)
Nov 25 18:28:12 np0005535838 systemd[1]: Started NTP client/server.
Nov 25 18:28:12 np0005535838 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 18:28:12 np0005535838 systemd[1]: session-12.scope: Consumed 28.618s CPU time.
Nov 25 18:28:12 np0005535838 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Nov 25 18:28:12 np0005535838 systemd-logind[789]: Removed session 12.
Nov 25 18:28:18 np0005535838 systemd-logind[789]: New session 13 of user zuul.
Nov 25 18:28:18 np0005535838 systemd[1]: Started Session 13 of User zuul.
Nov 25 18:28:19 np0005535838 python3.9[59293]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:20 np0005535838 python3.9[59447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:21 np0005535838 python3.9[59570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113299.3208683-34-265652927229391/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:21 np0005535838 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 18:28:21 np0005535838 systemd[1]: session-13.scope: Consumed 2.188s CPU time.
Nov 25 18:28:21 np0005535838 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Nov 25 18:28:21 np0005535838 systemd-logind[789]: Removed session 13.
Nov 25 18:28:27 np0005535838 systemd-logind[789]: New session 14 of user zuul.
Nov 25 18:28:27 np0005535838 systemd[1]: Started Session 14 of User zuul.
Nov 25 18:28:28 np0005535838 python3.9[59750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:28:29 np0005535838 python3.9[59906]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:30 np0005535838 python3.9[60081]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:31 np0005535838 python3.9[60204]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764113310.0955813-41-122739551407590/.source.json _original_basename=.ukar7_1n follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:32 np0005535838 python3.9[60356]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:33 np0005535838 python3.9[60479]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113312.256838-64-252325806125914/.source _original_basename=.k99ai_wz follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:34 np0005535838 python3.9[60631]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:28:35 np0005535838 python3.9[60783]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:35 np0005535838 python3.9[60906]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113314.6953118-88-192699240340847/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:28:36 np0005535838 python3.9[61058]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:37 np0005535838 python3.9[61181]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113316.1433115-88-262067154497655/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:28:38 np0005535838 python3.9[61333]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:39 np0005535838 python3.9[61485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:39 np0005535838 python3.9[61608]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113318.6002824-125-66351522289677/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:40 np0005535838 python3.9[61760]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:41 np0005535838 python3.9[61883]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113320.090793-140-47688736739004/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:42 np0005535838 python3.9[62035]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:28:42 np0005535838 systemd[1]: Reloading.
Nov 25 18:28:42 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:28:42 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:28:42 np0005535838 systemd[1]: Reloading.
Nov 25 18:28:43 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:28:43 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:28:43 np0005535838 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 18:28:43 np0005535838 systemd[1]: Finished EDPM Container Shutdown.
Nov 25 18:28:44 np0005535838 python3.9[62262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:44 np0005535838 python3.9[62385]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113323.511341-163-185667835254116/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:45 np0005535838 python3.9[62537]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:46 np0005535838 python3.9[62660]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113325.0181072-178-183837825266431/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:28:47 np0005535838 python3.9[62812]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:28:47 np0005535838 systemd[1]: Reloading.
Nov 25 18:28:47 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:28:47 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:28:47 np0005535838 systemd[1]: Reloading.
Nov 25 18:28:47 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:28:47 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:28:47 np0005535838 systemd[1]: Starting Create netns directory...
Nov 25 18:28:47 np0005535838 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 18:28:47 np0005535838 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 18:28:47 np0005535838 systemd[1]: Finished Create netns directory.
Nov 25 18:28:48 np0005535838 python3.9[63037]: ansible-ansible.builtin.service_facts Invoked
Nov 25 18:28:48 np0005535838 network[63054]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 18:28:48 np0005535838 network[63055]: 'network-scripts' will be removed from distribution in near future.
Nov 25 18:28:48 np0005535838 network[63056]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 18:28:52 np0005535838 python3.9[63318]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:28:52 np0005535838 systemd[1]: Reloading.
Nov 25 18:28:53 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:28:53 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:28:53 np0005535838 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 18:28:53 np0005535838 iptables.init[63358]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 18:28:53 np0005535838 iptables.init[63358]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 18:28:53 np0005535838 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 18:28:53 np0005535838 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 18:28:54 np0005535838 python3.9[63554]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:28:55 np0005535838 python3.9[63708]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:28:55 np0005535838 systemd[1]: Reloading.
Nov 25 18:28:55 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:28:55 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:28:56 np0005535838 systemd[1]: Starting Netfilter Tables...
Nov 25 18:28:56 np0005535838 systemd[1]: Finished Netfilter Tables.
Nov 25 18:28:57 np0005535838 python3.9[63899]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:28:58 np0005535838 python3.9[64052]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:28:59 np0005535838 python3.9[64177]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113337.7305067-247-244204145258460/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:00 np0005535838 python3.9[64330]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:29:00 np0005535838 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 18:29:00 np0005535838 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 18:29:01 np0005535838 python3.9[64486]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:01 np0005535838 python3.9[64638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:29:02 np0005535838 python3.9[64761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113341.344018-278-139466720103099/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:03 np0005535838 python3.9[64913]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 18:29:03 np0005535838 systemd[1]: Starting Time & Date Service...
Nov 25 18:29:03 np0005535838 systemd[1]: Started Time & Date Service.
Nov 25 18:29:04 np0005535838 python3.9[65069]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:05 np0005535838 python3.9[65221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:29:06 np0005535838 python3.9[65344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113345.1033528-313-277119713110126/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:07 np0005535838 python3.9[65496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:29:07 np0005535838 python3.9[65619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113346.5439913-328-227392112409970/.source.yaml _original_basename=.4ugmoiyc follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:08 np0005535838 python3.9[65771]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:29:09 np0005535838 python3.9[65894]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113348.0520525-343-156160704396082/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:10 np0005535838 python3.9[66046]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:29:11 np0005535838 python3.9[66199]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:29:12 np0005535838 python3[66352]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 18:29:13 np0005535838 python3.9[66504]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:29:13 np0005535838 python3.9[66627]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113352.3605719-382-266148416733782/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:14 np0005535838 python3.9[66781]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:29:15 np0005535838 python3.9[66904]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113353.9899564-397-73435920212215/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:15 np0005535838 python3.9[67056]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:29:16 np0005535838 python3.9[67179]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113355.427047-412-95570621061106/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:17 np0005535838 python3.9[67331]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:29:18 np0005535838 python3.9[67454]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113356.769764-427-49414068275958/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:18 np0005535838 python3.9[67606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:29:19 np0005535838 python3.9[67729]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113358.3091595-442-55445241155620/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:20 np0005535838 python3.9[67881]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:21 np0005535838 python3.9[68033]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:29:22 np0005535838 python3.9[68192]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:23 np0005535838 python3.9[68345]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:23 np0005535838 python3.9[68497]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:25 np0005535838 python3.9[68649]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 18:29:25 np0005535838 python3.9[68802]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 18:29:26 np0005535838 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 18:29:26 np0005535838 systemd[1]: session-14.scope: Consumed 43.757s CPU time.
Nov 25 18:29:26 np0005535838 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Nov 25 18:29:26 np0005535838 systemd-logind[789]: Removed session 14.
Nov 25 18:29:31 np0005535838 systemd-logind[789]: New session 15 of user zuul.
Nov 25 18:29:31 np0005535838 systemd[1]: Started Session 15 of User zuul.
Nov 25 18:29:32 np0005535838 python3.9[68983]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 18:29:33 np0005535838 python3.9[69135]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:29:33 np0005535838 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 18:29:34 np0005535838 python3.9[69289]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:29:36 np0005535838 python3.9[69443]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxrfdY9cGWIXdy/1Oy3o25kkem+UfkfNZM3QAaYeemr9vZEt0Kpt4rTEaZjtK/HkgMSoli0ko2twHhREfmcDjCZiEvSPhpr9yvJyxLe6m3r7nR2fIVc/1+5SeUdcJGWT8hvgD5okMZtCerl/MiW6+tFRt7Ar6X2TFlwXPjq3wia85WpL7X9vq40wZz0XlbpQxNxcEJWeVajcrd63Qib0m1FmhnmHPUqLHN0WmxXnMtONzo4fUQjq3zn230bIZCmjbFatl10s4NRy2udfAA7Xi0ubCZxQ/E8omg7y4ZxA94dJHZPmkCFSVLZUqdW3S3Ofhcem+PFVKRR2UvfcYHi79G6lS5brk3pbHqdyjd4/3scYp3aXFFt7ErEEhVud762RLGAHeACGlJQxmX8B/FbnWmbkw8BfptrYtzSuSqIXmN3UXrLrmfRrB+IMcIbbs/vzVMk6n6BzUjdXscFfnPltHEyvmdeIEBDyC5FLoJ2bTTrQpLt63pLIU09IA55rhBA+E=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBQX5RNdc24Y/t6cF9q9hL3e4G9bhmnpPT/NJWIujGtr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKt48jJg/HSNlIL9ftEIQgyUPOj8qZ1KotNNqzrVPi+UhJTDsaDnHI9k4z0iWOz87RQtpHNoPDx9+/vOjXzjj4o=#012 create=True mode=0644 path=/tmp/ansible.jpe4dkbb state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:37 np0005535838 python3.9[69595]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.jpe4dkbb' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:29:37 np0005535838 python3.9[69749]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.jpe4dkbb state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:38 np0005535838 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 18:29:38 np0005535838 systemd[1]: session-15.scope: Consumed 4.263s CPU time.
Nov 25 18:29:38 np0005535838 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Nov 25 18:29:38 np0005535838 systemd-logind[789]: Removed session 15.
Nov 25 18:29:43 np0005535838 systemd-logind[789]: New session 16 of user zuul.
Nov 25 18:29:43 np0005535838 systemd[1]: Started Session 16 of User zuul.
Nov 25 18:29:44 np0005535838 python3.9[69927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:29:46 np0005535838 python3.9[70083]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 18:29:47 np0005535838 python3.9[70237]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:29:48 np0005535838 python3.9[70390]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:29:49 np0005535838 python3.9[70543]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:29:49 np0005535838 python3.9[70697]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:29:50 np0005535838 python3.9[70854]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:29:51 np0005535838 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 18:29:51 np0005535838 systemd[1]: session-16.scope: Consumed 5.062s CPU time.
Nov 25 18:29:51 np0005535838 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Nov 25 18:29:51 np0005535838 systemd-logind[789]: Removed session 16.
Nov 25 18:29:57 np0005535838 systemd-logind[789]: New session 17 of user zuul.
Nov 25 18:29:57 np0005535838 systemd[1]: Started Session 17 of User zuul.
Nov 25 18:29:58 np0005535838 python3.9[71032]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:29:59 np0005535838 python3.9[71190]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:30:00 np0005535838 python3.9[71274]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 18:30:02 np0005535838 python3.9[71425]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:30:04 np0005535838 python3.9[71576]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 18:30:04 np0005535838 python3.9[71726]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:30:04 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:30:05 np0005535838 python3.9[71877]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:30:06 np0005535838 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 18:30:06 np0005535838 systemd[1]: session-17.scope: Consumed 6.701s CPU time.
Nov 25 18:30:06 np0005535838 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Nov 25 18:30:06 np0005535838 systemd-logind[789]: Removed session 17.
Nov 25 18:30:13 np0005535838 systemd-logind[789]: New session 18 of user zuul.
Nov 25 18:30:13 np0005535838 systemd[1]: Started Session 18 of User zuul.
Nov 25 18:30:19 np0005535838 python3[72643]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:30:21 np0005535838 python3[72738]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 18:30:22 np0005535838 chronyd[59112]: Selected source 206.108.0.133 (pool.ntp.org)
Nov 25 18:30:23 np0005535838 python3[72765]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 18:30:23 np0005535838 python3[72791]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:30:23 np0005535838 kernel: loop: module loaded
Nov 25 18:30:23 np0005535838 kernel: loop3: detected capacity change from 0 to 41943040
Nov 25 18:30:24 np0005535838 python3[72825]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:30:24 np0005535838 lvm[72828]: PV /dev/loop3 not used.
Nov 25 18:30:24 np0005535838 lvm[72830]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 18:30:24 np0005535838 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 25 18:30:24 np0005535838 lvm[72836]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 25 18:30:24 np0005535838 lvm[72840]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 18:30:24 np0005535838 lvm[72840]: VG ceph_vg0 finished
Nov 25 18:30:24 np0005535838 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
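The two `#012`-encoded command modules above (dd/losetup at 18:30:23, then pvcreate/vgcreate/lvcreate at 18:30:24) build a file-backed LVM physical volume for the OSD. A scaled-down, unprivileged sketch of the sparse-file step follows (1 MiB instead of 20G, temp file instead of /var/lib; the root-only loop/LVM steps are left as comments):

```shell
# The log's "dd if=/dev/zero ... bs=1 count=0 seek=20G" writes zero bytes and
# only sets the file length, so the 20G image starts out fully sparse.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1 count=0 seek=1M status=none
size=$(stat -c %s "$img")     # apparent size in bytes (1 MiB here)
blocks=$(stat -c %b "$img")   # 512-byte blocks actually allocated (near zero)
# Root-only continuation, as run by the playbook:
#   losetup /dev/loop3 "$img"
#   pvcreate /dev/loop3 && vgcreate ceph_vg0 /dev/loop3
#   lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
rm -f "$img"
```

The `lvm-activate-ceph_vg0.service` lines that follow are systemd's event-driven autoactivation reacting to the new PV, not part of the playbook itself.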
Nov 25 18:30:24 np0005535838 python3[72918]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:30:25 np0005535838 python3[72991]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113424.4942021-36371-56012506251373/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:30:26 np0005535838 python3[73041]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:30:26 np0005535838 systemd[1]: Reloading.
Nov 25 18:30:26 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:30:26 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:30:26 np0005535838 systemd[1]: Starting Ceph OSD losetup...
Nov 25 18:30:26 np0005535838 bash[73084]: /dev/loop3: [64513]:4194933 (/var/lib/ceph-osd-0.img)
Nov 25 18:30:26 np0005535838 systemd[1]: Finished Ceph OSD losetup.
Nov 25 18:30:26 np0005535838 lvm[73085]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 18:30:26 np0005535838 lvm[73085]: VG ceph_vg0 finished
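The unit copied into place at 18:30:25 is not logged (`content=NOT_LOGGING_PARAMETER`), but the "Starting Ceph OSD losetup..." / `bash[73084]` lines suggest a oneshot service that re-attaches the backing file across reboots. A hypothetical reconstruction of the rendered `ceph-osd-losetup.service.j2` (not the actual template):

```ini
# Hypothetical /etc/systemd/system/ceph-osd-losetup-0.service; the real
# template was not captured in the journal.
[Unit]
Description=Ceph OSD losetup

[Service]
Type=oneshot
# Re-attach the image if the loop device is not set up, then print the mapping
# (the bash[73084] line above matches losetup's listing output).
ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```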
Nov 25 18:30:26 np0005535838 python3[73111]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 18:30:28 np0005535838 python3[73138]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 18:30:28 np0005535838 python3[73164]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:30:28 np0005535838 kernel: loop4: detected capacity change from 0 to 41943040
Nov 25 18:30:29 np0005535838 python3[73196]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:30:29 np0005535838 lvm[73199]: PV /dev/loop4 not used.
Nov 25 18:30:29 np0005535838 lvm[73201]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 18:30:29 np0005535838 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 25 18:30:29 np0005535838 lvm[73205]:  1 logical volume(s) in volume group "ceph_vg1" now active
Nov 25 18:30:29 np0005535838 lvm[73212]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 18:30:29 np0005535838 lvm[73212]: VG ceph_vg1 finished
Nov 25 18:30:29 np0005535838 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 25 18:30:30 np0005535838 python3[73290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:30:30 np0005535838 python3[73363]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113429.7840981-36398-8534027427791/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:30:31 np0005535838 python3[73413]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:30:31 np0005535838 systemd[1]: Reloading.
Nov 25 18:30:31 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:30:31 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:30:31 np0005535838 systemd[1]: Starting Ceph OSD losetup...
Nov 25 18:30:31 np0005535838 bash[73454]: /dev/loop4: [64513]:4327939 (/var/lib/ceph-osd-1.img)
Nov 25 18:30:31 np0005535838 systemd[1]: Finished Ceph OSD losetup.
Nov 25 18:30:31 np0005535838 lvm[73455]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 18:30:31 np0005535838 lvm[73455]: VG ceph_vg1 finished
Nov 25 18:30:31 np0005535838 python3[73481]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 18:30:33 np0005535838 python3[73508]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 18:30:33 np0005535838 python3[73534]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:30:33 np0005535838 kernel: loop5: detected capacity change from 0 to 41943040
Nov 25 18:30:34 np0005535838 python3[73566]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:30:34 np0005535838 lvm[73569]: PV /dev/loop5 not used.
Nov 25 18:30:34 np0005535838 lvm[73571]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 18:30:34 np0005535838 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 25 18:30:34 np0005535838 lvm[73579]:  1 logical volume(s) in volume group "ceph_vg2" now active
Nov 25 18:30:34 np0005535838 lvm[73582]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 18:30:34 np0005535838 lvm[73582]: VG ceph_vg2 finished
Nov 25 18:30:34 np0005535838 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 25 18:30:35 np0005535838 python3[73660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:30:35 np0005535838 python3[73733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113434.7399487-36425-260303206291209/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:30:36 np0005535838 python3[73783]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:30:36 np0005535838 systemd[1]: Reloading.
Nov 25 18:30:36 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:30:36 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:30:36 np0005535838 systemd[1]: Starting Ceph OSD losetup...
Nov 25 18:30:36 np0005535838 bash[73823]: /dev/loop5: [64513]:4327952 (/var/lib/ceph-osd-2.img)
Nov 25 18:30:36 np0005535838 systemd[1]: Finished Ceph OSD losetup.
Nov 25 18:30:36 np0005535838 lvm[73824]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 18:30:36 np0005535838 lvm[73824]: VG ceph_vg2 finished
Nov 25 18:30:38 np0005535838 python3[73848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:30:40 np0005535838 python3[73941]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 18:30:42 np0005535838 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 18:30:42 np0005535838 systemd[1]: Starting man-db-cache-update.service...
Nov 25 18:30:42 np0005535838 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 18:30:42 np0005535838 systemd[1]: Finished man-db-cache-update.service.
Nov 25 18:30:42 np0005535838 systemd[1]: run-r1c78604c0ff44ca1b8e351090e5dff8d.service: Deactivated successfully.
Nov 25 18:30:42 np0005535838 python3[74052]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 18:30:43 np0005535838 python3[74080]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:30:43 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:30:43 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:30:44 np0005535838 python3[74144]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:30:44 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:30:44 np0005535838 python3[74170]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:30:45 np0005535838 python3[74248]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:30:45 np0005535838 python3[74321]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113445.2660484-36572-163902209348743/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:30:46 np0005535838 python3[74423]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:30:47 np0005535838 python3[74496]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113446.525373-36590-62975644289509/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:30:47 np0005535838 python3[74546]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 18:30:48 np0005535838 python3[74574]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 18:30:48 np0005535838 python3[74602]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 18:30:49 np0005535838 python3[74632]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 101922db-575f-58e2-980f-928050464f69 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
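The `_raw_params` above is newline-encoded (`#012`) and carries stray `\` continuations from the playbook YAML; decoded for readability (a transcript of the logged invocation, not a command to re-run):

```shell
/usr/sbin/cephadm bootstrap \
    --skip-firewalld \
    --skip-prepare-host \
    --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
    --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
    --ssh-user ceph-admin \
    --allow-fqdn-hostname \
    --output-keyring /etc/ceph/ceph.client.admin.keyring \
    --output-config /etc/ceph/ceph.conf \
    --fsid 101922db-575f-58e2-980f-928050464f69 \
    --config /home/ceph-admin/assimilate_ceph.conf \
    --single-host-defaults \
    --skip-monitoring-stack \
    --skip-dashboard \
    --mon-ip 192.168.122.100
```

The subsequent session-19 lines are the bootstrap's SSH self-check as the `ceph-admin` user, and the podman activity that follows (image pull of `quay.io/ceph/ceph:v18`, short-lived `ceph --version` and UID-probe containers) is cephadm verifying the container image before deploying daemons.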
Nov 25 18:30:49 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:30:49 np0005535838 systemd-logind[789]: New session 19 of user ceph-admin.
Nov 25 18:30:49 np0005535838 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 18:30:49 np0005535838 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 18:30:49 np0005535838 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 18:30:49 np0005535838 systemd[1]: Starting User Manager for UID 42477...
Nov 25 18:30:49 np0005535838 systemd[74655]: Queued start job for default target Main User Target.
Nov 25 18:30:49 np0005535838 systemd[74655]: Created slice User Application Slice.
Nov 25 18:30:49 np0005535838 systemd[74655]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 18:30:49 np0005535838 systemd[74655]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 18:30:49 np0005535838 systemd[74655]: Reached target Paths.
Nov 25 18:30:49 np0005535838 systemd[74655]: Reached target Timers.
Nov 25 18:30:49 np0005535838 systemd[74655]: Starting D-Bus User Message Bus Socket...
Nov 25 18:30:49 np0005535838 systemd[74655]: Starting Create User's Volatile Files and Directories...
Nov 25 18:30:49 np0005535838 systemd[74655]: Listening on D-Bus User Message Bus Socket.
Nov 25 18:30:49 np0005535838 systemd[74655]: Reached target Sockets.
Nov 25 18:30:49 np0005535838 systemd[74655]: Finished Create User's Volatile Files and Directories.
Nov 25 18:30:49 np0005535838 systemd[74655]: Reached target Basic System.
Nov 25 18:30:49 np0005535838 systemd[74655]: Reached target Main User Target.
Nov 25 18:30:49 np0005535838 systemd[74655]: Startup finished in 153ms.
Nov 25 18:30:49 np0005535838 systemd[1]: Started User Manager for UID 42477.
Nov 25 18:30:49 np0005535838 systemd[1]: Started Session 19 of User ceph-admin.
Nov 25 18:30:49 np0005535838 systemd[1]: session-19.scope: Deactivated successfully.
Nov 25 18:30:49 np0005535838 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Nov 25 18:30:49 np0005535838 systemd-logind[789]: Removed session 19.
Nov 25 18:30:51 np0005535838 systemd[1]: var-lib-containers-storage-overlay-compat2864020147-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 18:30:59 np0005535838 systemd[1]: Stopping User Manager for UID 42477...
Nov 25 18:30:59 np0005535838 systemd[74655]: Activating special unit Exit the Session...
Nov 25 18:30:59 np0005535838 systemd[74655]: Stopped target Main User Target.
Nov 25 18:30:59 np0005535838 systemd[74655]: Stopped target Basic System.
Nov 25 18:30:59 np0005535838 systemd[74655]: Stopped target Paths.
Nov 25 18:30:59 np0005535838 systemd[74655]: Stopped target Sockets.
Nov 25 18:30:59 np0005535838 systemd[74655]: Stopped target Timers.
Nov 25 18:30:59 np0005535838 systemd[74655]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 25 18:30:59 np0005535838 systemd[74655]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 18:30:59 np0005535838 systemd[74655]: Closed D-Bus User Message Bus Socket.
Nov 25 18:30:59 np0005535838 systemd[74655]: Stopped Create User's Volatile Files and Directories.
Nov 25 18:30:59 np0005535838 systemd[74655]: Removed slice User Application Slice.
Nov 25 18:30:59 np0005535838 systemd[74655]: Reached target Shutdown.
Nov 25 18:30:59 np0005535838 systemd[74655]: Finished Exit the Session.
Nov 25 18:30:59 np0005535838 systemd[74655]: Reached target Exit the Session.
Nov 25 18:30:59 np0005535838 systemd[1]: user@42477.service: Deactivated successfully.
Nov 25 18:30:59 np0005535838 systemd[1]: Stopped User Manager for UID 42477.
Nov 25 18:30:59 np0005535838 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 25 18:30:59 np0005535838 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 25 18:30:59 np0005535838 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 25 18:30:59 np0005535838 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 25 18:30:59 np0005535838 systemd[1]: Removed slice User Slice of UID 42477.
Nov 25 18:31:03 np0005535838 podman[74708]: 2025-11-25 23:31:03.085755479 +0000 UTC m=+13.246080392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:03 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:31:03 np0005535838 podman[74769]: 2025-11-25 23:31:03.180857648 +0000 UTC m=+0.060922713 container create 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:03 np0005535838 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 18:31:03 np0005535838 systemd[1]: Started libpod-conmon-35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653.scope.
Nov 25 18:31:03 np0005535838 podman[74769]: 2025-11-25 23:31:03.153954867 +0000 UTC m=+0.034019942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:03 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:03 np0005535838 podman[74769]: 2025-11-25 23:31:03.296104916 +0000 UTC m=+0.176170021 container init 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:03 np0005535838 podman[74769]: 2025-11-25 23:31:03.304715117 +0000 UTC m=+0.184780192 container start 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:03 np0005535838 podman[74769]: 2025-11-25 23:31:03.308770995 +0000 UTC m=+0.188836060 container attach 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 18:31:03 np0005535838 unruffled_ganguly[74786]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 18:31:03 np0005535838 systemd[1]: libpod-35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653.scope: Deactivated successfully.
Nov 25 18:31:03 np0005535838 podman[74769]: 2025-11-25 23:31:03.585017339 +0000 UTC m=+0.465082444 container died 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:31:03 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1a19102f429c23896fb48842039f5ef1057cc31e3a6d81d406dab182aac5b96a-merged.mount: Deactivated successfully.
Nov 25 18:31:03 np0005535838 podman[74769]: 2025-11-25 23:31:03.637652159 +0000 UTC m=+0.517717194 container remove 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 18:31:03 np0005535838 systemd[1]: libpod-conmon-35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653.scope: Deactivated successfully.
Nov 25 18:31:03 np0005535838 podman[74804]: 2025-11-25 23:31:03.720665184 +0000 UTC m=+0.059295201 container create eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 18:31:03 np0005535838 systemd[1]: Started libpod-conmon-eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0.scope.
Nov 25 18:31:03 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:03 np0005535838 podman[74804]: 2025-11-25 23:31:03.693424683 +0000 UTC m=+0.032054740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:03 np0005535838 podman[74804]: 2025-11-25 23:31:03.799229069 +0000 UTC m=+0.137859076 container init eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:03 np0005535838 podman[74804]: 2025-11-25 23:31:03.806559365 +0000 UTC m=+0.145189342 container start eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 18:31:03 np0005535838 podman[74804]: 2025-11-25 23:31:03.81010755 +0000 UTC m=+0.148737527 container attach eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:03 np0005535838 crazy_bell[74821]: 167 167
Nov 25 18:31:03 np0005535838 systemd[1]: libpod-eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0.scope: Deactivated successfully.
Nov 25 18:31:03 np0005535838 podman[74804]: 2025-11-25 23:31:03.811526679 +0000 UTC m=+0.150156656 container died eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:31:03 np0005535838 podman[74804]: 2025-11-25 23:31:03.848857049 +0000 UTC m=+0.187487026 container remove eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 18:31:03 np0005535838 systemd[1]: libpod-conmon-eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0.scope: Deactivated successfully.
Nov 25 18:31:03 np0005535838 podman[74836]: 2025-11-25 23:31:03.911150588 +0000 UTC m=+0.043012243 container create a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:31:03 np0005535838 systemd[1]: Started libpod-conmon-a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016.scope.
Nov 25 18:31:03 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:03 np0005535838 podman[74836]: 2025-11-25 23:31:03.97205715 +0000 UTC m=+0.103918815 container init a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:31:03 np0005535838 podman[74836]: 2025-11-25 23:31:03.98096892 +0000 UTC m=+0.112830575 container start a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:03 np0005535838 podman[74836]: 2025-11-25 23:31:03.984662668 +0000 UTC m=+0.116524393 container attach a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:03 np0005535838 podman[74836]: 2025-11-25 23:31:03.89292925 +0000 UTC m=+0.024790885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:04 np0005535838 suspicious_carver[74853]: AQA4PCZpLy7mABAAbQPwB+Zx7UwVIZajRo9+2Q==
Nov 25 18:31:04 np0005535838 systemd[1]: libpod-a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016.scope: Deactivated successfully.
Nov 25 18:31:04 np0005535838 podman[74836]: 2025-11-25 23:31:04.020253302 +0000 UTC m=+0.152115017 container died a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:04 np0005535838 podman[74836]: 2025-11-25 23:31:04.062621887 +0000 UTC m=+0.194483512 container remove a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 18:31:04 np0005535838 systemd[1]: libpod-conmon-a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016.scope: Deactivated successfully.
Nov 25 18:31:04 np0005535838 podman[74872]: 2025-11-25 23:31:04.159924484 +0000 UTC m=+0.064835008 container create ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:31:04 np0005535838 systemd[1]: Started libpod-conmon-ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08.scope.
Nov 25 18:31:04 np0005535838 podman[74872]: 2025-11-25 23:31:04.132472309 +0000 UTC m=+0.037382893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:04 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:04 np0005535838 podman[74872]: 2025-11-25 23:31:04.250713787 +0000 UTC m=+0.155624311 container init ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 18:31:04 np0005535838 podman[74872]: 2025-11-25 23:31:04.260302235 +0000 UTC m=+0.165212749 container start ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 18:31:04 np0005535838 podman[74872]: 2025-11-25 23:31:04.264015374 +0000 UTC m=+0.168925888 container attach ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:04 np0005535838 nice_spence[74888]: AQA4PCZpGKbZERAAlOt17UStSd5ILhCCM/ssoA==
Nov 25 18:31:04 np0005535838 systemd[1]: libpod-ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08.scope: Deactivated successfully.
Nov 25 18:31:04 np0005535838 podman[74872]: 2025-11-25 23:31:04.306534623 +0000 UTC m=+0.211445167 container died ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:31:04 np0005535838 systemd[1]: var-lib-containers-storage-overlay-5f4d3dbb742eb3e0577242299510165100aaca0f82e808700cb8a45a73c10393-merged.mount: Deactivated successfully.
Nov 25 18:31:04 np0005535838 podman[74872]: 2025-11-25 23:31:04.345377215 +0000 UTC m=+0.250287739 container remove ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 18:31:04 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:31:04 np0005535838 systemd[1]: libpod-conmon-ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08.scope: Deactivated successfully.
Nov 25 18:31:04 np0005535838 podman[74908]: 2025-11-25 23:31:04.433357462 +0000 UTC m=+0.059766573 container create 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 18:31:04 np0005535838 podman[74908]: 2025-11-25 23:31:04.402009802 +0000 UTC m=+0.028418963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:04 np0005535838 systemd[1]: Started libpod-conmon-6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2.scope.
Nov 25 18:31:04 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:04 np0005535838 podman[74908]: 2025-11-25 23:31:04.588443557 +0000 UTC m=+0.214852668 container init 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:04 np0005535838 podman[74908]: 2025-11-25 23:31:04.594373177 +0000 UTC m=+0.220782288 container start 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 18:31:04 np0005535838 podman[74908]: 2025-11-25 23:31:04.598190489 +0000 UTC m=+0.224599610 container attach 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:04 np0005535838 nice_hermann[74924]: AQA4PCZpzCmqJRAA3Tpjw8N/cUTVm1M6XMaoAw==
Nov 25 18:31:04 np0005535838 systemd[1]: libpod-6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2.scope: Deactivated successfully.
Nov 25 18:31:04 np0005535838 podman[74908]: 2025-11-25 23:31:04.637541723 +0000 UTC m=+0.263950834 container died 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:31:04 np0005535838 podman[74908]: 2025-11-25 23:31:04.682713924 +0000 UTC m=+0.309123035 container remove 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:31:04 np0005535838 systemd[1]: libpod-conmon-6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2.scope: Deactivated successfully.
Nov 25 18:31:04 np0005535838 podman[74945]: 2025-11-25 23:31:04.764349562 +0000 UTC m=+0.054639705 container create 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:31:04 np0005535838 systemd[1]: Started libpod-conmon-1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25.scope.
Nov 25 18:31:04 np0005535838 podman[74945]: 2025-11-25 23:31:04.736838944 +0000 UTC m=+0.027129157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:04 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:04 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7466d65fc3a495166840401897c0c545e94e433ba236b835695e8a77ab7bb3fb/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:04 np0005535838 podman[74945]: 2025-11-25 23:31:04.856485711 +0000 UTC m=+0.146775854 container init 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:04 np0005535838 podman[74945]: 2025-11-25 23:31:04.865686097 +0000 UTC m=+0.155976270 container start 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:31:04 np0005535838 podman[74945]: 2025-11-25 23:31:04.870005643 +0000 UTC m=+0.160295826 container attach 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:31:04 np0005535838 jolly_mendel[74962]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 25 18:31:04 np0005535838 jolly_mendel[74962]: setting min_mon_release = pacific
Nov 25 18:31:04 np0005535838 jolly_mendel[74962]: /usr/bin/monmaptool: set fsid to 101922db-575f-58e2-980f-928050464f69
Nov 25 18:31:04 np0005535838 jolly_mendel[74962]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 25 18:31:04 np0005535838 systemd[1]: libpod-1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25.scope: Deactivated successfully.
Nov 25 18:31:04 np0005535838 podman[74945]: 2025-11-25 23:31:04.911006892 +0000 UTC m=+0.201297045 container died 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:04 np0005535838 podman[74945]: 2025-11-25 23:31:04.949707948 +0000 UTC m=+0.239998091 container remove 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 18:31:04 np0005535838 systemd[1]: libpod-conmon-1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25.scope: Deactivated successfully.
Nov 25 18:31:05 np0005535838 podman[74982]: 2025-11-25 23:31:05.023950368 +0000 UTC m=+0.054231624 container create 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:05 np0005535838 systemd[1]: Started libpod-conmon-89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a.scope.
Nov 25 18:31:05 np0005535838 podman[74982]: 2025-11-25 23:31:04.995018613 +0000 UTC m=+0.025299929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:05 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:05 np0005535838 podman[74982]: 2025-11-25 23:31:05.112670696 +0000 UTC m=+0.142951922 container init 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:31:05 np0005535838 podman[74982]: 2025-11-25 23:31:05.124986796 +0000 UTC m=+0.155268012 container start 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:05 np0005535838 podman[74982]: 2025-11-25 23:31:05.127866453 +0000 UTC m=+0.158147669 container attach 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:05 np0005535838 systemd[1]: libpod-89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a.scope: Deactivated successfully.
Nov 25 18:31:05 np0005535838 podman[75024]: 2025-11-25 23:31:05.239973408 +0000 UTC m=+0.017347227 container died 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 18:31:05 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be-merged.mount: Deactivated successfully.
Nov 25 18:31:05 np0005535838 podman[75024]: 2025-11-25 23:31:05.271627915 +0000 UTC m=+0.049001734 container remove 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:31:05 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:31:05 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:31:05 np0005535838 systemd[1]: libpod-conmon-89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a.scope: Deactivated successfully.
Nov 25 18:31:05 np0005535838 systemd[1]: Reloading.
Nov 25 18:31:05 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:31:05 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:31:05 np0005535838 systemd[1]: Reloading.
Nov 25 18:31:05 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:31:05 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:31:05 np0005535838 systemd[1]: Reached target All Ceph clusters and services.
Nov 25 18:31:05 np0005535838 systemd[1]: Reloading.
Nov 25 18:31:05 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:31:05 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:31:06 np0005535838 systemd[1]: Reached target Ceph cluster 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:31:06 np0005535838 systemd[1]: Reloading.
Nov 25 18:31:06 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:31:06 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:31:06 np0005535838 systemd[1]: Reloading.
Nov 25 18:31:06 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:31:06 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:31:06 np0005535838 systemd[1]: Created slice Slice /system/ceph-101922db-575f-58e2-980f-928050464f69.
Nov 25 18:31:06 np0005535838 systemd[1]: Reached target System Time Set.
Nov 25 18:31:06 np0005535838 systemd[1]: Reached target System Time Synchronized.
Nov 25 18:31:06 np0005535838 systemd[1]: Starting Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:31:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:31:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:31:06 np0005535838 podman[75278]: 2025-11-25 23:31:06.968970111 +0000 UTC m=+0.073589384 container create ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:07 np0005535838 podman[75278]: 2025-11-25 23:31:06.939028728 +0000 UTC m=+0.043648041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 podman[75278]: 2025-11-25 23:31:07.06447641 +0000 UTC m=+0.169095713 container init ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 18:31:07 np0005535838 podman[75278]: 2025-11-25 23:31:07.08204715 +0000 UTC m=+0.186666423 container start ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:07 np0005535838 bash[75278]: ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7
Nov 25 18:31:07 np0005535838 systemd[1]: Started Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: pidfile_write: ignore empty --pid-file
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: load: jerasure load: lrc 
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: RocksDB version: 7.9.2
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Git sha 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: DB SUMMARY
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: DB Session ID:  UB9NOW7HEWWESFP4TBUB
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: CURRENT file:  CURRENT
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                         Options.error_if_exists: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                       Options.create_if_missing: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                                     Options.env: 0x55c0809eac40
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                                Options.info_log: 0x55c08216ae80
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                              Options.statistics: (nil)
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                               Options.use_fsync: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                              Options.db_log_dir: 
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                                 Options.wal_dir: 
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                    Options.write_buffer_manager: 0x55c08217ab40
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.unordered_write: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                               Options.row_cache: None
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                              Options.wal_filter: None
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.two_write_queues: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.wal_compression: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.atomic_flush: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.max_background_jobs: 2
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.max_background_compactions: -1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.max_subcompactions: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                          Options.max_open_files: -1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Compression algorithms supported:
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: 	kZSTD supported: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: 	kXpressCompression supported: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: 	kZlibCompression supported: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:           Options.merge_operator: 
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:        Options.compaction_filter: None
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c08216aa80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c0821631f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:          Options.compression: NoCompression
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.num_levels: 7
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cf57a6b1-796f-4cfa-b350-53eb10a4554d
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113467148059, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113467150312, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "UB9NOW7HEWWESFP4TBUB", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113467150434, "job": 1, "event": "recovery_finished"}
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c08218ce00
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: DB pointer 0x55c082216000
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c0821631f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)

** File Read Latency Histogram By Level [default] **
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@-1(???) e0 preinit fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-25T23:31:05.163110Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).mds e1 new map
Nov 25 18:31:07 np0005535838 podman[75299]: 2025-11-25 23:31:07.192225173 +0000 UTC m=+0.060807270 container create 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mkfs 101922db-575f-58e2-980f-928050464f69
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 18:31:07 np0005535838 systemd[1]: Started libpod-conmon-623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636.scope.
Nov 25 18:31:07 np0005535838 podman[75299]: 2025-11-25 23:31:07.170556793 +0000 UTC m=+0.039138930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:07 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b673d072b5d5713d4e9817711233687a70e52c95ae672a3b0cb9b3ee82541571/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b673d072b5d5713d4e9817711233687a70e52c95ae672a3b0cb9b3ee82541571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b673d072b5d5713d4e9817711233687a70e52c95ae672a3b0cb9b3ee82541571/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 podman[75299]: 2025-11-25 23:31:07.278746972 +0000 UTC m=+0.147329079 container init 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:31:07 np0005535838 podman[75299]: 2025-11-25 23:31:07.285969375 +0000 UTC m=+0.154551502 container start 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:31:07 np0005535838 podman[75299]: 2025-11-25 23:31:07.289759597 +0000 UTC m=+0.158341684 container attach 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 18:31:07 np0005535838 ceph-mon[75298]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/242569495' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:  cluster:
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:    id:     101922db-575f-58e2-980f-928050464f69
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:    health: HEALTH_OK
Nov 25 18:31:07 np0005535838 kind_pascal[75353]: 
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:  services:
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:    mon: 1 daemons, quorum compute-0 (age 0.472283s)
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:    mgr: no daemons active
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:    osd: 0 osds: 0 up, 0 in
Nov 25 18:31:07 np0005535838 kind_pascal[75353]: 
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:  data:
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:    pools:   0 pools, 0 pgs
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:    objects: 0 objects, 0 B
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:    usage:   0 B used, 0 B / 0 B avail
Nov 25 18:31:07 np0005535838 kind_pascal[75353]:    pgs:     
Nov 25 18:31:07 np0005535838 kind_pascal[75353]: 
Nov 25 18:31:07 np0005535838 systemd[1]: libpod-623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636.scope: Deactivated successfully.
Nov 25 18:31:07 np0005535838 podman[75299]: 2025-11-25 23:31:07.673565942 +0000 UTC m=+0.542148069 container died 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:31:07 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b673d072b5d5713d4e9817711233687a70e52c95ae672a3b0cb9b3ee82541571-merged.mount: Deactivated successfully.
Nov 25 18:31:07 np0005535838 podman[75299]: 2025-11-25 23:31:07.734278439 +0000 UTC m=+0.602860556 container remove 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 18:31:07 np0005535838 systemd[1]: libpod-conmon-623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636.scope: Deactivated successfully.
Nov 25 18:31:07 np0005535838 podman[75393]: 2025-11-25 23:31:07.839665622 +0000 UTC m=+0.075555915 container create f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:31:07 np0005535838 systemd[1]: Started libpod-conmon-f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd.scope.
Nov 25 18:31:07 np0005535838 podman[75393]: 2025-11-25 23:31:07.806491134 +0000 UTC m=+0.042381467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:07 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:07 np0005535838 podman[75393]: 2025-11-25 23:31:07.938534892 +0000 UTC m=+0.174425155 container init f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:07 np0005535838 podman[75393]: 2025-11-25 23:31:07.948845858 +0000 UTC m=+0.184736151 container start f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:07 np0005535838 podman[75393]: 2025-11-25 23:31:07.952528247 +0000 UTC m=+0.188418550 container attach f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:08 np0005535838 ceph-mon[75298]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 18:31:08 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 18:31:08 np0005535838 ceph-mon[75298]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2041492923' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 18:31:08 np0005535838 ceph-mon[75298]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2041492923' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 18:31:08 np0005535838 wizardly_dijkstra[75409]: 
Nov 25 18:31:08 np0005535838 wizardly_dijkstra[75409]: [global]
Nov 25 18:31:08 np0005535838 wizardly_dijkstra[75409]:         fsid = 101922db-575f-58e2-980f-928050464f69
Nov 25 18:31:08 np0005535838 wizardly_dijkstra[75409]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 25 18:31:08 np0005535838 wizardly_dijkstra[75409]:         osd_crush_chooseleaf_type = 0
Nov 25 18:31:08 np0005535838 systemd[1]: libpod-f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd.scope: Deactivated successfully.
Nov 25 18:31:08 np0005535838 podman[75393]: 2025-11-25 23:31:08.353392109 +0000 UTC m=+0.589282372 container died f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 18:31:08 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7-merged.mount: Deactivated successfully.
Nov 25 18:31:08 np0005535838 podman[75393]: 2025-11-25 23:31:08.405325351 +0000 UTC m=+0.641215604 container remove f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:08 np0005535838 systemd[1]: libpod-conmon-f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd.scope: Deactivated successfully.
Nov 25 18:31:08 np0005535838 podman[75449]: 2025-11-25 23:31:08.475124791 +0000 UTC m=+0.046739883 container create 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:31:08 np0005535838 systemd[1]: Started libpod-conmon-2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599.scope.
Nov 25 18:31:08 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:08 np0005535838 podman[75449]: 2025-11-25 23:31:08.450243125 +0000 UTC m=+0.021858257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:08 np0005535838 podman[75449]: 2025-11-25 23:31:08.568161805 +0000 UTC m=+0.139776947 container init 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 18:31:08 np0005535838 podman[75449]: 2025-11-25 23:31:08.57879328 +0000 UTC m=+0.150408412 container start 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 18:31:08 np0005535838 podman[75449]: 2025-11-25 23:31:08.583309781 +0000 UTC m=+0.154924913 container attach 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:31:08 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:31:08 np0005535838 ceph-mon[75298]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124780770' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:31:08 np0005535838 systemd[1]: libpod-2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599.scope: Deactivated successfully.
Nov 25 18:31:09 np0005535838 podman[75491]: 2025-11-25 23:31:09.02510238 +0000 UTC m=+0.024018075 container died 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:31:09 np0005535838 systemd[1]: var-lib-containers-storage-overlay-e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653-merged.mount: Deactivated successfully.
Nov 25 18:31:09 np0005535838 podman[75491]: 2025-11-25 23:31:09.07660338 +0000 UTC m=+0.075519035 container remove 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:09 np0005535838 systemd[1]: libpod-conmon-2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599.scope: Deactivated successfully.
Nov 25 18:31:09 np0005535838 systemd[1]: Stopping Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:31:09 np0005535838 ceph-mon[75298]: from='client.? 192.168.122.100:0/2041492923' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 18:31:09 np0005535838 ceph-mon[75298]: from='client.? 192.168.122.100:0/2041492923' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 18:31:09 np0005535838 ceph-mon[75298]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 18:31:09 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 18:31:09 np0005535838 ceph-mon[75298]: mon.compute-0@0(leader) e1 shutdown
Nov 25 18:31:09 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0[75293]: 2025-11-25T23:31:09.364+0000 7fba1831d640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 18:31:09 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0[75293]: 2025-11-25T23:31:09.364+0000 7fba1831d640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 18:31:09 np0005535838 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 18:31:09 np0005535838 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 18:31:09 np0005535838 podman[75535]: 2025-11-25 23:31:09.426600459 +0000 UTC m=+0.115667901 container died ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:31:09 np0005535838 systemd[1]: var-lib-containers-storage-overlay-577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012-merged.mount: Deactivated successfully.
Nov 25 18:31:09 np0005535838 podman[75535]: 2025-11-25 23:31:09.469431066 +0000 UTC m=+0.158498398 container remove ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:09 np0005535838 bash[75535]: ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0
Nov 25 18:31:09 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:31:09 np0005535838 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 18:31:09 np0005535838 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mon.compute-0.service: Deactivated successfully.
Nov 25 18:31:09 np0005535838 systemd[1]: Stopped Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:31:09 np0005535838 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mon.compute-0.service: Consumed 1.361s CPU time.
Nov 25 18:31:09 np0005535838 systemd[1]: Starting Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:31:09 np0005535838 podman[75634]: 2025-11-25 23:31:09.98337921 +0000 UTC m=+0.053367952 container create 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c2aa8903f56cf001ed4089ced6ff01949bdc00412c5be2ad4fd203969eb304/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c2aa8903f56cf001ed4089ced6ff01949bdc00412c5be2ad4fd203969eb304/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c2aa8903f56cf001ed4089ced6ff01949bdc00412c5be2ad4fd203969eb304/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c2aa8903f56cf001ed4089ced6ff01949bdc00412c5be2ad4fd203969eb304/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 podman[75634]: 2025-11-25 23:31:09.96098675 +0000 UTC m=+0.030975502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:10 np0005535838 podman[75634]: 2025-11-25 23:31:10.062579552 +0000 UTC m=+0.132568344 container init 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 18:31:10 np0005535838 podman[75634]: 2025-11-25 23:31:10.078725205 +0000 UTC m=+0.148713957 container start 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:10 np0005535838 bash[75634]: 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2
Nov 25 18:31:10 np0005535838 systemd[1]: Started Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: pidfile_write: ignore empty --pid-file
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: load: jerasure load: lrc 
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: RocksDB version: 7.9.2
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Git sha 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: DB SUMMARY
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: DB Session ID:  Q7VS70283MEZ1V621ZPR
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: CURRENT file:  CURRENT
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55676 ; 
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                         Options.error_if_exists: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                       Options.create_if_missing: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                                     Options.env: 0x55f0eb8aec40
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                                Options.info_log: 0x55f0edccb040
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                              Options.statistics: (nil)
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                               Options.use_fsync: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                              Options.db_log_dir: 
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                                 Options.wal_dir: 
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                    Options.write_buffer_manager: 0x55f0edcdab40
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.unordered_write: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                               Options.row_cache: None
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                              Options.wal_filter: None
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.two_write_queues: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.wal_compression: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.atomic_flush: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.max_background_jobs: 2
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.max_background_compactions: -1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.max_subcompactions: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                          Options.max_open_files: -1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Compression algorithms supported:
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: #011kZSTD supported: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: #011kXpressCompression supported: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: #011kZlibCompression supported: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:           Options.merge_operator: 
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:        Options.compaction_filter: None
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0edccac40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0edcc31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:          Options.compression: NoCompression
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.num_levels: 7
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cf57a6b1-796f-4cfa-b350-53eb10a4554d
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113470128961, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113470132223, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53797, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51386, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113470, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113470132346, "job": 1, "event": "recovery_finished"}
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f0edcece00
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: DB pointer 0x55f0edd76000
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f0edcc31f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@-1(???) e1 preinit fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@-1(???).mds e1 new map
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 18:31:10 np0005535838 podman[75655]: 2025-11-25 23:31:10.190925642 +0000 UTC m=+0.065460136 container create d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 18:31:10 np0005535838 systemd[1]: Started libpod-conmon-d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373.scope.
Nov 25 18:31:10 np0005535838 podman[75655]: 2025-11-25 23:31:10.16625693 +0000 UTC m=+0.040791474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:10 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6547807f31376b062c8c38efcb0448febf02c8e14ba702a49d8b2b8cc1dbd544/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6547807f31376b062c8c38efcb0448febf02c8e14ba702a49d8b2b8cc1dbd544/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6547807f31376b062c8c38efcb0448febf02c8e14ba702a49d8b2b8cc1dbd544/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 podman[75655]: 2025-11-25 23:31:10.295675149 +0000 UTC m=+0.170209703 container init d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:10 np0005535838 podman[75655]: 2025-11-25 23:31:10.306218711 +0000 UTC m=+0.180753205 container start d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:31:10 np0005535838 podman[75655]: 2025-11-25 23:31:10.30954317 +0000 UTC m=+0.184077734 container attach d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:31:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 25 18:31:10 np0005535838 systemd[1]: libpod-d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373.scope: Deactivated successfully.
Nov 25 18:31:10 np0005535838 podman[75655]: 2025-11-25 23:31:10.724697275 +0000 UTC m=+0.599231769 container died d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:10 np0005535838 systemd[1]: var-lib-containers-storage-overlay-6547807f31376b062c8c38efcb0448febf02c8e14ba702a49d8b2b8cc1dbd544-merged.mount: Deactivated successfully.
Nov 25 18:31:10 np0005535838 podman[75655]: 2025-11-25 23:31:10.779263197 +0000 UTC m=+0.653797691 container remove d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 18:31:10 np0005535838 systemd[1]: libpod-conmon-d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373.scope: Deactivated successfully.
Nov 25 18:31:10 np0005535838 podman[75748]: 2025-11-25 23:31:10.875089685 +0000 UTC m=+0.063947534 container create 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:31:10 np0005535838 systemd[1]: Started libpod-conmon-8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20.scope.
Nov 25 18:31:10 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:10 np0005535838 podman[75748]: 2025-11-25 23:31:10.848903384 +0000 UTC m=+0.037761273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff988a36679ac1dda85b5ec70e2e5270f9539f7e2682a19bff82a26fc118d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff988a36679ac1dda85b5ec70e2e5270f9539f7e2682a19bff82a26fc118d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff988a36679ac1dda85b5ec70e2e5270f9539f7e2682a19bff82a26fc118d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:10 np0005535838 podman[75748]: 2025-11-25 23:31:10.965051996 +0000 UTC m=+0.153909835 container init 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 18:31:10 np0005535838 podman[75748]: 2025-11-25 23:31:10.97749208 +0000 UTC m=+0.166349919 container start 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:10 np0005535838 podman[75748]: 2025-11-25 23:31:10.981680592 +0000 UTC m=+0.170538441 container attach 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:31:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 25 18:31:11 np0005535838 systemd[1]: libpod-8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20.scope: Deactivated successfully.
Nov 25 18:31:11 np0005535838 conmon[75764]: conmon 8912c836b11621119281 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20.scope/container/memory.events
Nov 25 18:31:11 np0005535838 podman[75790]: 2025-11-25 23:31:11.464097539 +0000 UTC m=+0.039558591 container died 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:31:11 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a2ff988a36679ac1dda85b5ec70e2e5270f9539f7e2682a19bff82a26fc118d9-merged.mount: Deactivated successfully.
Nov 25 18:31:11 np0005535838 podman[75790]: 2025-11-25 23:31:11.507368399 +0000 UTC m=+0.082829411 container remove 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:11 np0005535838 systemd[1]: libpod-conmon-8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20.scope: Deactivated successfully.
Nov 25 18:31:11 np0005535838 systemd[1]: Reloading.
Nov 25 18:31:11 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:31:11 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:31:11 np0005535838 systemd[1]: Reloading.
Nov 25 18:31:11 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:31:11 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:31:12 np0005535838 systemd[1]: Starting Ceph mgr.compute-0.gwqfsl for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:31:12 np0005535838 podman[75934]: 2025-11-25 23:31:12.530565238 +0000 UTC m=+0.071277331 container create cb17cd0be6b6c5a9b97ff9dc5584e4a27121c5d35ae63d1e3300d59246c81be2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:31:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac7215356bd862bcf51092d9c1f43e2b2864a20a35f4a69cadddfad036631b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac7215356bd862bcf51092d9c1f43e2b2864a20a35f4a69cadddfad036631b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac7215356bd862bcf51092d9c1f43e2b2864a20a35f4a69cadddfad036631b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac7215356bd862bcf51092d9c1f43e2b2864a20a35f4a69cadddfad036631b0/merged/var/lib/ceph/mgr/ceph-compute-0.gwqfsl supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:12 np0005535838 podman[75934]: 2025-11-25 23:31:12.501434528 +0000 UTC m=+0.042146661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:12 np0005535838 podman[75934]: 2025-11-25 23:31:12.611661932 +0000 UTC m=+0.152374055 container init cb17cd0be6b6c5a9b97ff9dc5584e4a27121c5d35ae63d1e3300d59246c81be2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 18:31:12 np0005535838 podman[75934]: 2025-11-25 23:31:12.623005516 +0000 UTC m=+0.163717599 container start cb17cd0be6b6c5a9b97ff9dc5584e4a27121c5d35ae63d1e3300d59246c81be2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:31:12 np0005535838 bash[75934]: cb17cd0be6b6c5a9b97ff9dc5584e4a27121c5d35ae63d1e3300d59246c81be2
Nov 25 18:31:12 np0005535838 systemd[1]: Started Ceph mgr.compute-0.gwqfsl for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:31:12 np0005535838 ceph-mgr[75954]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 18:31:12 np0005535838 ceph-mgr[75954]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 18:31:12 np0005535838 ceph-mgr[75954]: pidfile_write: ignore empty --pid-file
Nov 25 18:31:12 np0005535838 podman[75955]: 2025-11-25 23:31:12.726880359 +0000 UTC m=+0.058558310 container create f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:12 np0005535838 systemd[1]: Started libpod-conmon-f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff.scope.
Nov 25 18:31:12 np0005535838 podman[75955]: 2025-11-25 23:31:12.697167773 +0000 UTC m=+0.028845734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:12 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756f4a954c25588889f994d4d2e19c4baf1f0a0530e93c4ae7d1788e473981f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756f4a954c25588889f994d4d2e19c4baf1f0a0530e93c4ae7d1788e473981f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756f4a954c25588889f994d4d2e19c4baf1f0a0530e93c4ae7d1788e473981f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:12 np0005535838 podman[75955]: 2025-11-25 23:31:12.821782002 +0000 UTC m=+0.153459923 container init f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 25 18:31:12 np0005535838 podman[75955]: 2025-11-25 23:31:12.833290181 +0000 UTC m=+0.164968132 container start f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 18:31:12 np0005535838 podman[75955]: 2025-11-25 23:31:12.837635487 +0000 UTC m=+0.169313478 container attach f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:31:12 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'alerts'
Nov 25 18:31:13 np0005535838 ceph-mgr[75954]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 18:31:13 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'balancer'
Nov 25 18:31:13 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:13.157+0000 7f0aaebe6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 18:31:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 18:31:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216517223' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]: 
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]: {
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "health": {
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "status": "HEALTH_OK",
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "checks": {},
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "mutes": []
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    },
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "election_epoch": 5,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "quorum": [
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        0
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    ],
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "quorum_names": [
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "compute-0"
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    ],
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "quorum_age": 3,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "monmap": {
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "epoch": 1,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "min_mon_release_name": "reef",
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "num_mons": 1
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    },
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "osdmap": {
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "epoch": 1,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "num_osds": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "num_up_osds": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "osd_up_since": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "num_in_osds": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "osd_in_since": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "num_remapped_pgs": 0
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    },
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "pgmap": {
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "pgs_by_state": [],
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "num_pgs": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "num_pools": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "num_objects": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "data_bytes": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "bytes_used": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "bytes_avail": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "bytes_total": 0
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    },
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "fsmap": {
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "epoch": 1,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "by_rank": [],
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "up:standby": 0
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    },
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "mgrmap": {
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "available": false,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "num_standbys": 0,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "modules": [
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:            "iostat",
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:            "nfs",
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:            "restful"
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        ],
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "services": {}
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    },
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "servicemap": {
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "epoch": 1,
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:        "services": {}
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    },
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]:    "progress_events": {}
Nov 25 18:31:13 np0005535838 vigilant_proskuriakova[75994]: }
Nov 25 18:31:13 np0005535838 systemd[1]: libpod-f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff.scope: Deactivated successfully.
Nov 25 18:31:13 np0005535838 podman[75955]: 2025-11-25 23:31:13.249468623 +0000 UTC m=+0.581146574 container died f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:13 np0005535838 systemd[1]: var-lib-containers-storage-overlay-9756f4a954c25588889f994d4d2e19c4baf1f0a0530e93c4ae7d1788e473981f-merged.mount: Deactivated successfully.
Nov 25 18:31:13 np0005535838 podman[75955]: 2025-11-25 23:31:13.303933003 +0000 UTC m=+0.635610914 container remove f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 18:31:13 np0005535838 systemd[1]: libpod-conmon-f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff.scope: Deactivated successfully.
Nov 25 18:31:13 np0005535838 ceph-mgr[75954]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 18:31:13 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'cephadm'
Nov 25 18:31:13 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:13.461+0000 7f0aaebe6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 18:31:15 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'crash'
Nov 25 18:31:15 np0005535838 podman[76042]: 2025-11-25 23:31:15.380273154 +0000 UTC m=+0.048329286 container create e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:31:15 np0005535838 systemd[1]: Started libpod-conmon-e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6.scope.
Nov 25 18:31:15 np0005535838 podman[76042]: 2025-11-25 23:31:15.358893111 +0000 UTC m=+0.026949233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:15 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:15 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f82046b61787684eb825545a885c89e630acc89315cc87f8740c098c5f357f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:15 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f82046b61787684eb825545a885c89e630acc89315cc87f8740c098c5f357f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:15 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f82046b61787684eb825545a885c89e630acc89315cc87f8740c098c5f357f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:15 np0005535838 podman[76042]: 2025-11-25 23:31:15.480914431 +0000 UTC m=+0.148970603 container init e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:15 np0005535838 podman[76042]: 2025-11-25 23:31:15.494425753 +0000 UTC m=+0.162481865 container start e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:31:15 np0005535838 podman[76042]: 2025-11-25 23:31:15.498422839 +0000 UTC m=+0.166478981 container attach e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:31:15 np0005535838 ceph-mgr[75954]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 18:31:15 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'dashboard'
Nov 25 18:31:15 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:15.545+0000 7f0aaebe6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 18:31:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 18:31:15 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985170876' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 18:31:15 np0005535838 distracted_jones[76058]: 
Nov 25 18:31:15 np0005535838 distracted_jones[76058]: {
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "health": {
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "status": "HEALTH_OK",
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "checks": {},
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "mutes": []
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    },
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "election_epoch": 5,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "quorum": [
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        0
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    ],
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "quorum_names": [
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "compute-0"
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    ],
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "quorum_age": 5,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "monmap": {
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "epoch": 1,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "min_mon_release_name": "reef",
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "num_mons": 1
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    },
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "osdmap": {
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "epoch": 1,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "num_osds": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "num_up_osds": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "osd_up_since": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "num_in_osds": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "osd_in_since": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "num_remapped_pgs": 0
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    },
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "pgmap": {
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "pgs_by_state": [],
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "num_pgs": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "num_pools": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "num_objects": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "data_bytes": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "bytes_used": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "bytes_avail": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "bytes_total": 0
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    },
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "fsmap": {
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "epoch": 1,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "by_rank": [],
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "up:standby": 0
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    },
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "mgrmap": {
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "available": false,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "num_standbys": 0,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "modules": [
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:            "iostat",
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:            "nfs",
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:            "restful"
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        ],
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "services": {}
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    },
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "servicemap": {
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "epoch": 1,
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:        "services": {}
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    },
Nov 25 18:31:15 np0005535838 distracted_jones[76058]:    "progress_events": {}
Nov 25 18:31:15 np0005535838 distracted_jones[76058]: }
Nov 25 18:31:15 np0005535838 systemd[1]: libpod-e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6.scope: Deactivated successfully.
Nov 25 18:31:15 np0005535838 podman[76042]: 2025-11-25 23:31:15.901896582 +0000 UTC m=+0.569952694 container died e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:15 np0005535838 systemd[1]: var-lib-containers-storage-overlay-70f82046b61787684eb825545a885c89e630acc89315cc87f8740c098c5f357f-merged.mount: Deactivated successfully.
Nov 25 18:31:15 np0005535838 podman[76042]: 2025-11-25 23:31:15.944911174 +0000 UTC m=+0.612967276 container remove e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:15 np0005535838 systemd[1]: libpod-conmon-e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6.scope: Deactivated successfully.
Nov 25 18:31:16 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'devicehealth'
Nov 25 18:31:17 np0005535838 ceph-mgr[75954]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 18:31:17 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 18:31:17 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:17.165+0000 7f0aaebe6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 18:31:17 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 18:31:17 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 18:31:17 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]:  from numpy import show_config as show_numpy_config
Nov 25 18:31:17 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:17.658+0000 7f0aaebe6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 18:31:17 np0005535838 ceph-mgr[75954]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 18:31:17 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'influx'
Nov 25 18:31:17 np0005535838 ceph-mgr[75954]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 18:31:17 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'insights'
Nov 25 18:31:17 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:17.880+0000 7f0aaebe6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 18:31:18 np0005535838 podman[76097]: 2025-11-25 23:31:18.057723673 +0000 UTC m=+0.079275465 container create e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 18:31:18 np0005535838 systemd[1]: Started libpod-conmon-e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868.scope.
Nov 25 18:31:18 np0005535838 podman[76097]: 2025-11-25 23:31:18.023124246 +0000 UTC m=+0.044676058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:18 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'iostat'
Nov 25 18:31:18 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab09c449ced8a14e05233d1a2394f3e0453dc1bd9ead6340eca2d2d580921806/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab09c449ced8a14e05233d1a2394f3e0453dc1bd9ead6340eca2d2d580921806/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab09c449ced8a14e05233d1a2394f3e0453dc1bd9ead6340eca2d2d580921806/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:18 np0005535838 podman[76097]: 2025-11-25 23:31:18.163539779 +0000 UTC m=+0.185091631 container init e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:18 np0005535838 podman[76097]: 2025-11-25 23:31:18.173196568 +0000 UTC m=+0.194748360 container start e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:31:18 np0005535838 podman[76097]: 2025-11-25 23:31:18.176816304 +0000 UTC m=+0.198368106 container attach e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 18:31:18 np0005535838 ceph-mgr[75954]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 18:31:18 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'k8sevents'
Nov 25 18:31:18 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:18.353+0000 7f0aaebe6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 18:31:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 18:31:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271446657' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 18:31:18 np0005535838 festive_goodall[76113]: 
Nov 25 18:31:18 np0005535838 festive_goodall[76113]: {
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "health": {
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "status": "HEALTH_OK",
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "checks": {},
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "mutes": []
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    },
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "election_epoch": 5,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "quorum": [
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        0
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    ],
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "quorum_names": [
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "compute-0"
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    ],
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "quorum_age": 8,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "monmap": {
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "epoch": 1,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "min_mon_release_name": "reef",
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "num_mons": 1
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    },
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "osdmap": {
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "epoch": 1,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "num_osds": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "num_up_osds": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "osd_up_since": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "num_in_osds": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "osd_in_since": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "num_remapped_pgs": 0
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    },
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "pgmap": {
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "pgs_by_state": [],
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "num_pgs": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "num_pools": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "num_objects": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "data_bytes": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "bytes_used": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "bytes_avail": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "bytes_total": 0
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    },
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "fsmap": {
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "epoch": 1,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "by_rank": [],
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "up:standby": 0
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    },
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "mgrmap": {
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "available": false,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "num_standbys": 0,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "modules": [
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:            "iostat",
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:            "nfs",
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:            "restful"
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        ],
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "services": {}
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    },
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "servicemap": {
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "epoch": 1,
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:        "services": {}
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    },
Nov 25 18:31:18 np0005535838 festive_goodall[76113]:    "progress_events": {}
Nov 25 18:31:18 np0005535838 festive_goodall[76113]: }
Nov 25 18:31:18 np0005535838 systemd[1]: libpod-e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868.scope: Deactivated successfully.
Nov 25 18:31:18 np0005535838 podman[76097]: 2025-11-25 23:31:18.598005271 +0000 UTC m=+0.619557093 container died e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:18 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ab09c449ced8a14e05233d1a2394f3e0453dc1bd9ead6340eca2d2d580921806-merged.mount: Deactivated successfully.
Nov 25 18:31:18 np0005535838 podman[76097]: 2025-11-25 23:31:18.661816222 +0000 UTC m=+0.683368024 container remove e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:31:18 np0005535838 systemd[1]: libpod-conmon-e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868.scope: Deactivated successfully.
Nov 25 18:31:20 np0005535838 podman[76153]: 2025-11-25 23:31:20.71345112 +0000 UTC m=+0.024732734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:20 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'localpool'
Nov 25 18:31:20 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 18:31:20 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'mirroring'
Nov 25 18:31:20 np0005535838 podman[76153]: 2025-11-25 23:31:20.815356211 +0000 UTC m=+0.126637775 container create a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:20 np0005535838 systemd[1]: Started libpod-conmon-a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af.scope.
Nov 25 18:31:20 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5500194b98892d6a6d41c5981ecba23b63236c5711af3818268c60e0b5921/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5500194b98892d6a6d41c5981ecba23b63236c5711af3818268c60e0b5921/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5500194b98892d6a6d41c5981ecba23b63236c5711af3818268c60e0b5921/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:20 np0005535838 podman[76153]: 2025-11-25 23:31:20.897936274 +0000 UTC m=+0.209217908 container init a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:20 np0005535838 podman[76153]: 2025-11-25 23:31:20.908691403 +0000 UTC m=+0.219972937 container start a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:20 np0005535838 podman[76153]: 2025-11-25 23:31:20.913134502 +0000 UTC m=+0.224416056 container attach a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:31:20 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'nfs'
Nov 25 18:31:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 18:31:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970492701' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]: 
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]: {
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "health": {
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "status": "HEALTH_OK",
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "checks": {},
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "mutes": []
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    },
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "election_epoch": 5,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "quorum": [
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        0
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    ],
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "quorum_names": [
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "compute-0"
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    ],
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "quorum_age": 11,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "monmap": {
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "epoch": 1,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "min_mon_release_name": "reef",
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "num_mons": 1
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    },
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "osdmap": {
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "epoch": 1,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "num_osds": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "num_up_osds": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "osd_up_since": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "num_in_osds": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "osd_in_since": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "num_remapped_pgs": 0
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    },
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "pgmap": {
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "pgs_by_state": [],
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "num_pgs": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "num_pools": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "num_objects": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "data_bytes": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "bytes_used": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "bytes_avail": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "bytes_total": 0
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    },
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "fsmap": {
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "epoch": 1,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "by_rank": [],
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "up:standby": 0
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    },
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "mgrmap": {
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "available": false,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "num_standbys": 0,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "modules": [
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:            "iostat",
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:            "nfs",
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:            "restful"
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        ],
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "services": {}
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    },
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "servicemap": {
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "epoch": 1,
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:        "services": {}
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    },
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]:    "progress_events": {}
Nov 25 18:31:21 np0005535838 hungry_cartwright[76167]: }
Nov 25 18:31:21 np0005535838 systemd[1]: libpod-a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af.scope: Deactivated successfully.
Nov 25 18:31:21 np0005535838 podman[76153]: 2025-11-25 23:31:21.322158673 +0000 UTC m=+0.633440247 container died a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 18:31:21 np0005535838 systemd[1]: var-lib-containers-storage-overlay-3ab5500194b98892d6a6d41c5981ecba23b63236c5711af3818268c60e0b5921-merged.mount: Deactivated successfully.
Nov 25 18:31:21 np0005535838 podman[76153]: 2025-11-25 23:31:21.391357717 +0000 UTC m=+0.702639291 container remove a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:31:21 np0005535838 systemd[1]: libpod-conmon-a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af.scope: Deactivated successfully.
Nov 25 18:31:21 np0005535838 ceph-mgr[75954]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 18:31:21 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'orchestrator'
Nov 25 18:31:21 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:21.629+0000 7f0aaebe6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 18:31:22 np0005535838 ceph-mgr[75954]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 18:31:22 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 18:31:22 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:22.233+0000 7f0aaebe6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 18:31:22 np0005535838 ceph-mgr[75954]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 18:31:22 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'osd_support'
Nov 25 18:31:22 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:22.479+0000 7f0aaebe6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 18:31:22 np0005535838 ceph-mgr[75954]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 18:31:22 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 18:31:22 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:22.698+0000 7f0aaebe6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 18:31:22 np0005535838 ceph-mgr[75954]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 18:31:22 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'progress'
Nov 25 18:31:22 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:22.963+0000 7f0aaebe6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 18:31:23 np0005535838 ceph-mgr[75954]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 18:31:23 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'prometheus'
Nov 25 18:31:23 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:23.209+0000 7f0aaebe6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 18:31:23 np0005535838 podman[76207]: 2025-11-25 23:31:23.497863226 +0000 UTC m=+0.071149177 container create 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:23 np0005535838 systemd[1]: Started libpod-conmon-764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53.scope.
Nov 25 18:31:23 np0005535838 podman[76207]: 2025-11-25 23:31:23.470509663 +0000 UTC m=+0.043795654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:23 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25d1b721875761a5f38658318b0dcffd0fd387e64da96a6137b81509443a555/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25d1b721875761a5f38658318b0dcffd0fd387e64da96a6137b81509443a555/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25d1b721875761a5f38658318b0dcffd0fd387e64da96a6137b81509443a555/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:23 np0005535838 podman[76207]: 2025-11-25 23:31:23.587756615 +0000 UTC m=+0.161042556 container init 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 18:31:23 np0005535838 podman[76207]: 2025-11-25 23:31:23.598078311 +0000 UTC m=+0.171364252 container start 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:23 np0005535838 podman[76207]: 2025-11-25 23:31:23.602481189 +0000 UTC m=+0.175767120 container attach 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 18:31:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 18:31:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164624347' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 18:31:23 np0005535838 happy_franklin[76223]: 
Nov 25 18:31:23 np0005535838 happy_franklin[76223]: {
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "health": {
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "status": "HEALTH_OK",
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "checks": {},
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "mutes": []
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    },
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "election_epoch": 5,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "quorum": [
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        0
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    ],
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "quorum_names": [
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "compute-0"
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    ],
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "quorum_age": 13,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "monmap": {
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "epoch": 1,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "min_mon_release_name": "reef",
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "num_mons": 1
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    },
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "osdmap": {
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "epoch": 1,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "num_osds": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "num_up_osds": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "osd_up_since": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "num_in_osds": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "osd_in_since": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "num_remapped_pgs": 0
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    },
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "pgmap": {
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "pgs_by_state": [],
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "num_pgs": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "num_pools": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "num_objects": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "data_bytes": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "bytes_used": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "bytes_avail": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "bytes_total": 0
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    },
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "fsmap": {
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "epoch": 1,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "by_rank": [],
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "up:standby": 0
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    },
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "mgrmap": {
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "available": false,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "num_standbys": 0,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "modules": [
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:            "iostat",
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:            "nfs",
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:            "restful"
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        ],
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "services": {}
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    },
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:    "servicemap": {
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "epoch": 1,
Nov 25 18:31:23 np0005535838 happy_franklin[76223]:        "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 18:31:24 np0005535838 happy_franklin[76223]:        "services": {}
Nov 25 18:31:24 np0005535838 happy_franklin[76223]:    },
Nov 25 18:31:24 np0005535838 happy_franklin[76223]:    "progress_events": {}
Nov 25 18:31:24 np0005535838 happy_franklin[76223]: }
Nov 25 18:31:24 np0005535838 systemd[1]: libpod-764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53.scope: Deactivated successfully.
Nov 25 18:31:24 np0005535838 podman[76207]: 2025-11-25 23:31:24.014562903 +0000 UTC m=+0.587848854 container died 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 18:31:24 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c25d1b721875761a5f38658318b0dcffd0fd387e64da96a6137b81509443a555-merged.mount: Deactivated successfully.
Nov 25 18:31:24 np0005535838 podman[76207]: 2025-11-25 23:31:24.064343946 +0000 UTC m=+0.637629857 container remove 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:24 np0005535838 systemd[1]: libpod-conmon-764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53.scope: Deactivated successfully.
Nov 25 18:31:24 np0005535838 ceph-mgr[75954]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 18:31:24 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'rbd_support'
Nov 25 18:31:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:24.156+0000 7f0aaebe6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 18:31:24 np0005535838 ceph-mgr[75954]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 18:31:24 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'restful'
Nov 25 18:31:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:24.452+0000 7f0aaebe6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 18:31:25 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'rgw'
Nov 25 18:31:25 np0005535838 ceph-mgr[75954]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 18:31:25 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'rook'
Nov 25 18:31:25 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:25.791+0000 7f0aaebe6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 18:31:26 np0005535838 podman[76261]: 2025-11-25 23:31:26.149191165 +0000 UTC m=+0.054117740 container create 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 18:31:26 np0005535838 systemd[1]: Started libpod-conmon-5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99.scope.
Nov 25 18:31:26 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:26 np0005535838 podman[76261]: 2025-11-25 23:31:26.122706196 +0000 UTC m=+0.027632811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f4c267b016974d9db2bb6c5027de81069d06296462d533f1199cb3d5017424/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f4c267b016974d9db2bb6c5027de81069d06296462d533f1199cb3d5017424/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f4c267b016974d9db2bb6c5027de81069d06296462d533f1199cb3d5017424/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:26 np0005535838 podman[76261]: 2025-11-25 23:31:26.253582533 +0000 UTC m=+0.158509138 container init 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:31:26 np0005535838 podman[76261]: 2025-11-25 23:31:26.259374998 +0000 UTC m=+0.164301533 container start 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:31:26 np0005535838 podman[76261]: 2025-11-25 23:31:26.262908724 +0000 UTC m=+0.167835319 container attach 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 18:31:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438032832' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]: 
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]: {
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "health": {
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "status": "HEALTH_OK",
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "checks": {},
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "mutes": []
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    },
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "election_epoch": 5,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "quorum": [
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        0
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    ],
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "quorum_names": [
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "compute-0"
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    ],
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "quorum_age": 16,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "monmap": {
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "epoch": 1,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "min_mon_release_name": "reef",
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "num_mons": 1
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    },
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "osdmap": {
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "epoch": 1,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "num_osds": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "num_up_osds": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "osd_up_since": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "num_in_osds": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "osd_in_since": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "num_remapped_pgs": 0
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    },
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "pgmap": {
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "pgs_by_state": [],
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "num_pgs": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "num_pools": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "num_objects": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "data_bytes": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "bytes_used": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "bytes_avail": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "bytes_total": 0
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    },
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "fsmap": {
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "epoch": 1,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "by_rank": [],
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "up:standby": 0
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    },
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "mgrmap": {
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "available": false,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "num_standbys": 0,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "modules": [
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:            "iostat",
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:            "nfs",
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:            "restful"
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        ],
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "services": {}
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    },
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "servicemap": {
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "epoch": 1,
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:        "services": {}
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    },
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]:    "progress_events": {}
Nov 25 18:31:26 np0005535838 frosty_mccarthy[76279]: }
Nov 25 18:31:26 np0005535838 systemd[1]: libpod-5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99.scope: Deactivated successfully.
Nov 25 18:31:26 np0005535838 podman[76261]: 2025-11-25 23:31:26.656204012 +0000 UTC m=+0.561130547 container died 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:31:26 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b7f4c267b016974d9db2bb6c5027de81069d06296462d533f1199cb3d5017424-merged.mount: Deactivated successfully.
Nov 25 18:31:26 np0005535838 podman[76261]: 2025-11-25 23:31:26.740717087 +0000 UTC m=+0.645643632 container remove 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 25 18:31:26 np0005535838 systemd[1]: libpod-conmon-5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99.scope: Deactivated successfully.
Nov 25 18:31:27 np0005535838 ceph-mgr[75954]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 18:31:27 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'selftest'
Nov 25 18:31:27 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:27.807+0000 7f0aaebe6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 18:31:28 np0005535838 ceph-mgr[75954]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 18:31:28 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'snap_schedule'
Nov 25 18:31:28 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:28.034+0000 7f0aaebe6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 18:31:28 np0005535838 ceph-mgr[75954]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 18:31:28 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:28.266+0000 7f0aaebe6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 18:31:28 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'stats'
Nov 25 18:31:28 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'status'
Nov 25 18:31:28 np0005535838 ceph-mgr[75954]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 18:31:28 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'telegraf'
Nov 25 18:31:28 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:28.735+0000 7f0aaebe6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 18:31:28 np0005535838 podman[76316]: 2025-11-25 23:31:28.896919814 +0000 UTC m=+0.125452881 container create db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 18:31:28 np0005535838 podman[76316]: 2025-11-25 23:31:28.815090355 +0000 UTC m=+0.043623462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:28 np0005535838 systemd[1]: Started libpod-conmon-db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960.scope.
Nov 25 18:31:28 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:28 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dafbff88ca3e204120d090e4d1f66b749dc2fe6f01dc06751cdf2ad79d7cf0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:28 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dafbff88ca3e204120d090e4d1f66b749dc2fe6f01dc06751cdf2ad79d7cf0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:28 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dafbff88ca3e204120d090e4d1f66b749dc2fe6f01dc06751cdf2ad79d7cf0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:28 np0005535838 ceph-mgr[75954]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 18:31:28 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'telemetry'
Nov 25 18:31:28 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:28.972+0000 7f0aaebe6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 18:31:28 np0005535838 podman[76316]: 2025-11-25 23:31:28.995881088 +0000 UTC m=+0.224414145 container init db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:29 np0005535838 podman[76316]: 2025-11-25 23:31:29.004822814 +0000 UTC m=+0.233355871 container start db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:31:29 np0005535838 podman[76316]: 2025-11-25 23:31:29.009148061 +0000 UTC m=+0.237681158 container attach db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:31:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 18:31:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3887798461' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]: 
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]: {
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "health": {
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "status": "HEALTH_OK",
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "checks": {},
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "mutes": []
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    },
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "election_epoch": 5,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "quorum": [
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        0
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    ],
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "quorum_names": [
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "compute-0"
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    ],
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "quorum_age": 19,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "monmap": {
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "epoch": 1,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "min_mon_release_name": "reef",
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "num_mons": 1
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    },
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "osdmap": {
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "epoch": 1,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "num_osds": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "num_up_osds": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "osd_up_since": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "num_in_osds": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "osd_in_since": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "num_remapped_pgs": 0
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    },
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "pgmap": {
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "pgs_by_state": [],
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "num_pgs": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "num_pools": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "num_objects": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "data_bytes": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "bytes_used": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "bytes_avail": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "bytes_total": 0
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    },
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "fsmap": {
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "epoch": 1,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "by_rank": [],
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "up:standby": 0
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    },
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "mgrmap": {
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "available": false,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "num_standbys": 0,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "modules": [
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:            "iostat",
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:            "nfs",
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:            "restful"
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        ],
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "services": {}
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    },
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "servicemap": {
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "epoch": 1,
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:        "services": {}
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    },
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]:    "progress_events": {}
Nov 25 18:31:29 np0005535838 xenodochial_snyder[76333]: }
Nov 25 18:31:29 np0005535838 systemd[1]: libpod-db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960.scope: Deactivated successfully.
Nov 25 18:31:29 np0005535838 podman[76316]: 2025-11-25 23:31:29.406791692 +0000 UTC m=+0.635324799 container died db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:29 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2dafbff88ca3e204120d090e4d1f66b749dc2fe6f01dc06751cdf2ad79d7cf0b-merged.mount: Deactivated successfully.
Nov 25 18:31:29 np0005535838 podman[76316]: 2025-11-25 23:31:29.463392873 +0000 UTC m=+0.691925920 container remove db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:31:29 np0005535838 systemd[1]: libpod-conmon-db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960.scope: Deactivated successfully.
Nov 25 18:31:29 np0005535838 ceph-mgr[75954]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 18:31:29 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 18:31:29 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:29.526+0000 7f0aaebe6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 18:31:30 np0005535838 ceph-mgr[75954]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 18:31:30 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'volumes'
Nov 25 18:31:30 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:30.133+0000 7f0aaebe6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 18:31:30 np0005535838 ceph-mgr[75954]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 18:31:30 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'zabbix'
Nov 25 18:31:30 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:30.792+0000 7f0aaebe6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 18:31:31 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:31.030+0000 7f0aaebe6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: ms_deliver_dispatch: unhandled message 0x557a5f9b31e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.gwqfsl
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr handle_mgr_map Activating!
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr handle_mgr_map I am now activating
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.gwqfsl(active, starting, since 0.0126147s)
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"} v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"}]: dispatch
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: balancer
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: crash
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [balancer INFO root] Starting
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Manager daemon compute-0.gwqfsl is now available
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: devicehealth
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: iostat
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:31:31
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: nfs
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [devicehealth INFO root] Starting
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [balancer INFO root] No pools available
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: orchestrator
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: pg_autoscaler
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: progress
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [progress INFO root] Loading...
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [progress INFO root] No stored events to load
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [progress INFO root] Loaded [] historic events
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] recovery thread starting
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] starting setup
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: rbd_support
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: restful
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: status
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"} v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"}]: dispatch
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: telemetry
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [restful WARNING root] server not running: no certificate configured
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] PerfHandler: starting
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TaskHandler: starting
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"} v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"}]: dispatch
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] setup complete
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 25 18:31:31 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: volumes
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: Activating manager daemon compute-0.gwqfsl
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: Manager daemon compute-0.gwqfsl is now available
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"}]: dispatch
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"}]: dispatch
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:31 np0005535838 podman[76451]: 2025-11-25 23:31:31.545613703 +0000 UTC m=+0.056890595 container create 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:31 np0005535838 systemd[1]: Started libpod-conmon-05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889.scope.
Nov 25 18:31:31 np0005535838 podman[76451]: 2025-11-25 23:31:31.516542522 +0000 UTC m=+0.027819454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:31 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1313fba31b604d924f0f1b48e27ae7ad758d6f2b5e2b0d67c59505feb545b170/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1313fba31b604d924f0f1b48e27ae7ad758d6f2b5e2b0d67c59505feb545b170/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1313fba31b604d924f0f1b48e27ae7ad758d6f2b5e2b0d67c59505feb545b170/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:31 np0005535838 podman[76451]: 2025-11-25 23:31:31.644762949 +0000 UTC m=+0.156039851 container init 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:31 np0005535838 podman[76451]: 2025-11-25 23:31:31.653353903 +0000 UTC m=+0.164630795 container start 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:31:31 np0005535838 podman[76451]: 2025-11-25 23:31:31.657575849 +0000 UTC m=+0.168852741 container attach 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:31:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 18:31:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/172577660' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 18:31:32 np0005535838 charming_noyce[76468]: 
Nov 25 18:31:32 np0005535838 charming_noyce[76468]: {
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "health": {
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "status": "HEALTH_OK",
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "checks": {},
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "mutes": []
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    },
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "election_epoch": 5,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "quorum": [
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        0
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    ],
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "quorum_names": [
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "compute-0"
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    ],
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "quorum_age": 21,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "monmap": {
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "epoch": 1,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "min_mon_release_name": "reef",
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "num_mons": 1
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    },
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "osdmap": {
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "epoch": 1,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "num_osds": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "num_up_osds": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "osd_up_since": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "num_in_osds": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "osd_in_since": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "num_remapped_pgs": 0
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    },
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "pgmap": {
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "pgs_by_state": [],
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "num_pgs": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "num_pools": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "num_objects": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "data_bytes": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "bytes_used": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "bytes_avail": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "bytes_total": 0
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    },
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "fsmap": {
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "epoch": 1,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "by_rank": [],
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "up:standby": 0
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    },
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "mgrmap": {
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "available": false,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "num_standbys": 0,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "modules": [
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:            "iostat",
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:            "nfs",
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:            "restful"
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        ],
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "services": {}
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    },
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "servicemap": {
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "epoch": 1,
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:        "services": {}
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    },
Nov 25 18:31:32 np0005535838 charming_noyce[76468]:    "progress_events": {}
Nov 25 18:31:32 np0005535838 charming_noyce[76468]: }
Nov 25 18:31:32 np0005535838 systemd[1]: libpod-05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889.scope: Deactivated successfully.
Nov 25 18:31:32 np0005535838 podman[76451]: 2025-11-25 23:31:32.052692204 +0000 UTC m=+0.563969096 container died 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:32 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.gwqfsl(active, since 1.03325s)
Nov 25 18:31:32 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1313fba31b604d924f0f1b48e27ae7ad758d6f2b5e2b0d67c59505feb545b170-merged.mount: Deactivated successfully.
Nov 25 18:31:32 np0005535838 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:32 np0005535838 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:32 np0005535838 podman[76451]: 2025-11-25 23:31:32.108855623 +0000 UTC m=+0.620132485 container remove 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:32 np0005535838 systemd[1]: libpod-conmon-05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889.scope: Deactivated successfully.
Nov 25 18:31:33 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:31:33 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.gwqfsl(active, since 2s)
Nov 25 18:31:34 np0005535838 podman[76506]: 2025-11-25 23:31:34.209578447 +0000 UTC m=+0.065752090 container create a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:31:34 np0005535838 systemd[1]: Started libpod-conmon-a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b.scope.
Nov 25 18:31:34 np0005535838 podman[76506]: 2025-11-25 23:31:34.182932701 +0000 UTC m=+0.039106394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:34 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476c73c76b5c3b46a40dd0bd99124bc2404656264ceab6df708f328dda16e7b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476c73c76b5c3b46a40dd0bd99124bc2404656264ceab6df708f328dda16e7b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476c73c76b5c3b46a40dd0bd99124bc2404656264ceab6df708f328dda16e7b3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:34 np0005535838 podman[76506]: 2025-11-25 23:31:34.308344821 +0000 UTC m=+0.164518454 container init a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:34 np0005535838 podman[76506]: 2025-11-25 23:31:34.318601754 +0000 UTC m=+0.174775397 container start a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:31:34 np0005535838 podman[76506]: 2025-11-25 23:31:34.322449918 +0000 UTC m=+0.178623551 container attach a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:31:34 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 18:31:34 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376796191' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 18:31:34 np0005535838 musing_jang[76522]: 
Nov 25 18:31:34 np0005535838 musing_jang[76522]: {
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "health": {
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "status": "HEALTH_OK",
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "checks": {},
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "mutes": []
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    },
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "election_epoch": 5,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "quorum": [
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        0
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    ],
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "quorum_names": [
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "compute-0"
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    ],
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "quorum_age": 24,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "monmap": {
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "epoch": 1,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "min_mon_release_name": "reef",
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "num_mons": 1
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    },
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "osdmap": {
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "epoch": 1,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "num_osds": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "num_up_osds": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "osd_up_since": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "num_in_osds": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "osd_in_since": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "num_remapped_pgs": 0
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    },
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "pgmap": {
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "pgs_by_state": [],
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "num_pgs": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "num_pools": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "num_objects": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "data_bytes": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "bytes_used": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "bytes_avail": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "bytes_total": 0
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    },
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "fsmap": {
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "epoch": 1,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "by_rank": [],
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "up:standby": 0
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    },
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "mgrmap": {
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "available": true,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "num_standbys": 0,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "modules": [
Nov 25 18:31:34 np0005535838 musing_jang[76522]:            "iostat",
Nov 25 18:31:34 np0005535838 musing_jang[76522]:            "nfs",
Nov 25 18:31:34 np0005535838 musing_jang[76522]:            "restful"
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        ],
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "services": {}
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    },
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "servicemap": {
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "epoch": 1,
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 18:31:34 np0005535838 musing_jang[76522]:        "services": {}
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    },
Nov 25 18:31:34 np0005535838 musing_jang[76522]:    "progress_events": {}
Nov 25 18:31:34 np0005535838 musing_jang[76522]: }
Nov 25 18:31:34 np0005535838 systemd[1]: libpod-a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b.scope: Deactivated successfully.
Nov 25 18:31:35 np0005535838 podman[76548]: 2025-11-25 23:31:35.00179049 +0000 UTC m=+0.027846694 container died a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 18:31:35 np0005535838 systemd[1]: var-lib-containers-storage-overlay-476c73c76b5c3b46a40dd0bd99124bc2404656264ceab6df708f328dda16e7b3-merged.mount: Deactivated successfully.
Nov 25 18:31:35 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:31:35 np0005535838 podman[76548]: 2025-11-25 23:31:35.058376641 +0000 UTC m=+0.084432795 container remove a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:31:35 np0005535838 systemd[1]: libpod-conmon-a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b.scope: Deactivated successfully.
Nov 25 18:31:35 np0005535838 podman[76562]: 2025-11-25 23:31:35.165350236 +0000 UTC m=+0.057828070 container create 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:35 np0005535838 systemd[1]: Started libpod-conmon-2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af.scope.
Nov 25 18:31:35 np0005535838 podman[76562]: 2025-11-25 23:31:35.144083874 +0000 UTC m=+0.036561718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:35 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:35 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:35 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:35 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:35 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:35 np0005535838 podman[76562]: 2025-11-25 23:31:35.270963302 +0000 UTC m=+0.163441146 container init 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 25 18:31:35 np0005535838 podman[76562]: 2025-11-25 23:31:35.285367862 +0000 UTC m=+0.177845666 container start 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:31:35 np0005535838 podman[76562]: 2025-11-25 23:31:35.289235526 +0000 UTC m=+0.181713330 container attach 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:31:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 18:31:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3582724958' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 18:31:35 np0005535838 systemd[1]: libpod-2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af.scope: Deactivated successfully.
Nov 25 18:31:35 np0005535838 podman[76604]: 2025-11-25 23:31:35.892163773 +0000 UTC m=+0.042167853 container died 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:31:36 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3582724958' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 18:31:36 np0005535838 systemd[1]: var-lib-containers-storage-overlay-0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950-merged.mount: Deactivated successfully.
Nov 25 18:31:36 np0005535838 podman[76604]: 2025-11-25 23:31:36.40202138 +0000 UTC m=+0.552025400 container remove 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:31:36 np0005535838 systemd[1]: libpod-conmon-2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af.scope: Deactivated successfully.
Nov 25 18:31:36 np0005535838 podman[76619]: 2025-11-25 23:31:36.507509657 +0000 UTC m=+0.068513807 container create 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:36 np0005535838 systemd[1]: Started libpod-conmon-66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1.scope.
Nov 25 18:31:36 np0005535838 podman[76619]: 2025-11-25 23:31:36.474815643 +0000 UTC m=+0.035819883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:36 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:36 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c8a9dabf4c776b4ad498d00bc40198f8f663989f7cf92b0e21d6272c40ebcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:36 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c8a9dabf4c776b4ad498d00bc40198f8f663989f7cf92b0e21d6272c40ebcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:36 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c8a9dabf4c776b4ad498d00bc40198f8f663989f7cf92b0e21d6272c40ebcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:36 np0005535838 podman[76619]: 2025-11-25 23:31:36.598571533 +0000 UTC m=+0.159575753 container init 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:36 np0005535838 podman[76619]: 2025-11-25 23:31:36.610860008 +0000 UTC m=+0.171864178 container start 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 18:31:36 np0005535838 podman[76619]: 2025-11-25 23:31:36.6159017 +0000 UTC m=+0.176905880 container attach 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:37 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:31:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 25 18:31:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1382411213' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 18:31:37 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1382411213' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 18:31:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1382411213' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 18:31:37 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.gwqfsl(active, since 6s)
Nov 25 18:31:37 np0005535838 systemd[1]: libpod-66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1.scope: Deactivated successfully.
Nov 25 18:31:37 np0005535838 podman[76619]: 2025-11-25 23:31:37.33683611 +0000 UTC m=+0.897840280 container died 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 18:31:37 np0005535838 systemd[1]: var-lib-containers-storage-overlay-27c8a9dabf4c776b4ad498d00bc40198f8f663989f7cf92b0e21d6272c40ebcd-merged.mount: Deactivated successfully.
Nov 25 18:31:37 np0005535838 podman[76619]: 2025-11-25 23:31:37.386270487 +0000 UTC m=+0.947274627 container remove 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 18:31:37 np0005535838 systemd[1]: libpod-conmon-66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1.scope: Deactivated successfully.
Nov 25 18:31:37 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: ignoring --setuser ceph since I am not root
Nov 25 18:31:37 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: ignoring --setgroup ceph since I am not root
Nov 25 18:31:37 np0005535838 ceph-mgr[75954]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 18:31:37 np0005535838 ceph-mgr[75954]: pidfile_write: ignore empty --pid-file
Nov 25 18:31:37 np0005535838 podman[76676]: 2025-11-25 23:31:37.472404572 +0000 UTC m=+0.058767986 container create 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:37 np0005535838 systemd[1]: Started libpod-conmon-87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717.scope.
Nov 25 18:31:37 np0005535838 podman[76676]: 2025-11-25 23:31:37.443256681 +0000 UTC m=+0.029620145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:37 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'alerts'
Nov 25 18:31:37 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ca21aee2973ed1fb5c01fa7680567849e6697452926988a5504a2f8dd2010c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ca21aee2973ed1fb5c01fa7680567849e6697452926988a5504a2f8dd2010c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ca21aee2973ed1fb5c01fa7680567849e6697452926988a5504a2f8dd2010c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:37 np0005535838 podman[76676]: 2025-11-25 23:31:37.594839143 +0000 UTC m=+0.181202637 container init 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 18:31:37 np0005535838 podman[76676]: 2025-11-25 23:31:37.601041942 +0000 UTC m=+0.187405366 container start 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:37 np0005535838 podman[76676]: 2025-11-25 23:31:37.605023096 +0000 UTC m=+0.191386520 container attach 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:37 np0005535838 ceph-mgr[75954]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 18:31:37 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'balancer'
Nov 25 18:31:37 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:37.842+0000 7f36d5dd2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 18:31:38 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:38.077+0000 7f36d5dd2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 18:31:38 np0005535838 ceph-mgr[75954]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 18:31:38 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'cephadm'
Nov 25 18:31:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 18:31:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723603241' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 18:31:38 np0005535838 crazy_elion[76716]: {
Nov 25 18:31:38 np0005535838 crazy_elion[76716]:    "epoch": 5,
Nov 25 18:31:38 np0005535838 crazy_elion[76716]:    "available": true,
Nov 25 18:31:38 np0005535838 crazy_elion[76716]:    "active_name": "compute-0.gwqfsl",
Nov 25 18:31:38 np0005535838 crazy_elion[76716]:    "num_standby": 0
Nov 25 18:31:38 np0005535838 crazy_elion[76716]: }
Nov 25 18:31:38 np0005535838 systemd[1]: libpod-87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717.scope: Deactivated successfully.
Nov 25 18:31:38 np0005535838 podman[76676]: 2025-11-25 23:31:38.17367948 +0000 UTC m=+0.760042934 container died 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:31:38 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b4ca21aee2973ed1fb5c01fa7680567849e6697452926988a5504a2f8dd2010c-merged.mount: Deactivated successfully.
Nov 25 18:31:38 np0005535838 podman[76676]: 2025-11-25 23:31:38.221799639 +0000 UTC m=+0.808163023 container remove 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:38 np0005535838 systemd[1]: libpod-conmon-87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717.scope: Deactivated successfully.
Nov 25 18:31:38 np0005535838 podman[76755]: 2025-11-25 23:31:38.297442429 +0000 UTC m=+0.053491873 container create fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:38 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1382411213' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 18:31:38 np0005535838 systemd[1]: Started libpod-conmon-fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63.scope.
Nov 25 18:31:38 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a6f1c63d6c5d6f8f347ecd2658e47baef3680b19f1c84a5b0876ec47efc61e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a6f1c63d6c5d6f8f347ecd2658e47baef3680b19f1c84a5b0876ec47efc61e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a6f1c63d6c5d6f8f347ecd2658e47baef3680b19f1c84a5b0876ec47efc61e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:38 np0005535838 podman[76755]: 2025-11-25 23:31:38.357381602 +0000 UTC m=+0.113431026 container init fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 18:31:38 np0005535838 podman[76755]: 2025-11-25 23:31:38.367601965 +0000 UTC m=+0.123651399 container start fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:31:38 np0005535838 podman[76755]: 2025-11-25 23:31:38.275361272 +0000 UTC m=+0.031410696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:38 np0005535838 podman[76755]: 2025-11-25 23:31:38.371229618 +0000 UTC m=+0.127279052 container attach fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:31:39 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'crash'
Nov 25 18:31:40 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:40.189+0000 7f36d5dd2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 18:31:40 np0005535838 ceph-mgr[75954]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 18:31:40 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'dashboard'
Nov 25 18:31:41 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'devicehealth'
Nov 25 18:31:41 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:41.745+0000 7f36d5dd2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 18:31:41 np0005535838 ceph-mgr[75954]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 18:31:41 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 18:31:42 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 18:31:42 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 18:31:42 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]:  from numpy import show_config as show_numpy_config
Nov 25 18:31:42 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:42.234+0000 7f36d5dd2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 18:31:42 np0005535838 ceph-mgr[75954]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 18:31:42 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'influx'
Nov 25 18:31:42 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:42.455+0000 7f36d5dd2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 18:31:42 np0005535838 ceph-mgr[75954]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 18:31:42 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'insights'
Nov 25 18:31:42 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'iostat'
Nov 25 18:31:42 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:42.915+0000 7f36d5dd2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 18:31:42 np0005535838 ceph-mgr[75954]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 18:31:42 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'k8sevents'
Nov 25 18:31:44 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'localpool'
Nov 25 18:31:44 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 18:31:45 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'mirroring'
Nov 25 18:31:45 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'nfs'
Nov 25 18:31:46 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:46.267+0000 7f36d5dd2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 18:31:46 np0005535838 ceph-mgr[75954]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 18:31:46 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'orchestrator'
Nov 25 18:31:46 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:46.915+0000 7f36d5dd2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 18:31:46 np0005535838 ceph-mgr[75954]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 18:31:46 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 18:31:47 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:47.179+0000 7f36d5dd2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 18:31:47 np0005535838 ceph-mgr[75954]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 18:31:47 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'osd_support'
Nov 25 18:31:47 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:47.399+0000 7f36d5dd2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 18:31:47 np0005535838 ceph-mgr[75954]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 18:31:47 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 18:31:47 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:47.656+0000 7f36d5dd2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 18:31:47 np0005535838 ceph-mgr[75954]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 18:31:47 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'progress'
Nov 25 18:31:47 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:47.877+0000 7f36d5dd2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 18:31:47 np0005535838 ceph-mgr[75954]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 18:31:47 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'prometheus'
Nov 25 18:31:48 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:48.836+0000 7f36d5dd2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 18:31:48 np0005535838 ceph-mgr[75954]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 18:31:48 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'rbd_support'
Nov 25 18:31:49 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:49.124+0000 7f36d5dd2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 18:31:49 np0005535838 ceph-mgr[75954]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 18:31:49 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'restful'
Nov 25 18:31:49 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'rgw'
Nov 25 18:31:50 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:50.548+0000 7f36d5dd2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 18:31:50 np0005535838 ceph-mgr[75954]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 18:31:50 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'rook'
Nov 25 18:31:52 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:52.568+0000 7f36d5dd2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 18:31:52 np0005535838 ceph-mgr[75954]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 18:31:52 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'selftest'
Nov 25 18:31:52 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:52.802+0000 7f36d5dd2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 18:31:52 np0005535838 ceph-mgr[75954]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 18:31:52 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'snap_schedule'
Nov 25 18:31:53 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:53.034+0000 7f36d5dd2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 18:31:53 np0005535838 ceph-mgr[75954]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 18:31:53 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'stats'
Nov 25 18:31:53 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'status'
Nov 25 18:31:53 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:53.523+0000 7f36d5dd2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 18:31:53 np0005535838 ceph-mgr[75954]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 18:31:53 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'telegraf'
Nov 25 18:31:53 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:53.769+0000 7f36d5dd2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 18:31:53 np0005535838 ceph-mgr[75954]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 18:31:53 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'telemetry'
Nov 25 18:31:54 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:54.372+0000 7f36d5dd2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 18:31:54 np0005535838 ceph-mgr[75954]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 18:31:54 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 18:31:55 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:55.025+0000 7f36d5dd2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'volumes'
Nov 25 18:31:55 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:55.725+0000 7f36d5dd2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: mgr[py] Loading python module 'zabbix'
Nov 25 18:31:55 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:55.954+0000 7f36d5dd2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Active manager daemon compute-0.gwqfsl restarted
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.gwqfsl
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: ms_deliver_dispatch: unhandled message 0x564b3d8431e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: mgr handle_mgr_map Activating!
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: mgr handle_mgr_map I am now activating
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.gwqfsl(active, starting, since 0.0175253s)
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"} v 0) v1
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"}]: dispatch
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: balancer
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] Starting
Nov 25 18:31:55 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Manager daemon compute-0.gwqfsl is now available
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:31:55
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:31:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] No pools available
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: Active manager daemon compute-0.gwqfsl restarted
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: Activating manager daemon compute-0.gwqfsl
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: Manager daemon compute-0.gwqfsl is now available
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: cephadm
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: crash
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: devicehealth
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: iostat
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [devicehealth INFO root] Starting
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: nfs
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: orchestrator
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: pg_autoscaler
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: progress
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [progress INFO root] Loading...
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [progress INFO root] No stored events to load
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [progress INFO root] Loaded [] historic events
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] recovery thread starting
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] starting setup
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: rbd_support
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"} v 0) v1
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"}]: dispatch
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: restful
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: status
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [restful WARNING root] server not running: no certificate configured
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: telemetry
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] PerfHandler: starting
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TaskHandler: starting
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"} v 0) v1
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"}]: dispatch
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] setup complete
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: mgr load Constructed class from module: volumes
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 25 18:31:56 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.gwqfsl(active, since 1.02756s)
Nov 25 18:31:56 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 25 18:31:56 np0005535838 vigorous_keldysh[76771]: {
Nov 25 18:31:56 np0005535838 vigorous_keldysh[76771]:    "mgrmap_epoch": 7,
Nov 25 18:31:56 np0005535838 vigorous_keldysh[76771]:    "initialized": true
Nov 25 18:31:56 np0005535838 vigorous_keldysh[76771]: }
Nov 25 18:31:57 np0005535838 systemd[1]: libpod-fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63.scope: Deactivated successfully.
Nov 25 18:31:57 np0005535838 podman[76755]: 2025-11-25 23:31:57.013314822 +0000 UTC m=+18.769364276 container died fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: Found migration_current of "None". Setting to last migration.
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"}]: dispatch
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"}]: dispatch
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:57 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a2a6f1c63d6c5d6f8f347ecd2658e47baef3680b19f1c84a5b0876ec47efc61e-merged.mount: Deactivated successfully.
Nov 25 18:31:57 np0005535838 podman[76755]: 2025-11-25 23:31:57.073765277 +0000 UTC m=+18.829814721 container remove fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:31:57 np0005535838 systemd[1]: libpod-conmon-fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63.scope: Deactivated successfully.
Nov 25 18:31:57 np0005535838 podman[76930]: 2025-11-25 23:31:57.177830265 +0000 UTC m=+0.073454598 container create e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:31:57 np0005535838 systemd[1]: Started libpod-conmon-e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce.scope.
Nov 25 18:31:57 np0005535838 podman[76930]: 2025-11-25 23:31:57.138096107 +0000 UTC m=+0.033720490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:57 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:57 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662330ff237cf6dda9ea9d42b56274a7f96301fbaf6207fa352ec6f794faa7dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:57 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662330ff237cf6dda9ea9d42b56274a7f96301fbaf6207fa352ec6f794faa7dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:57 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662330ff237cf6dda9ea9d42b56274a7f96301fbaf6207fa352ec6f794faa7dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:57 np0005535838 podman[76930]: 2025-11-25 23:31:57.269086782 +0000 UTC m=+0.164711095 container init e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:31:57 np0005535838 podman[76930]: 2025-11-25 23:31:57.274718227 +0000 UTC m=+0.170342520 container start e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:31:57 np0005535838 podman[76930]: 2025-11-25 23:31:57.277550824 +0000 UTC m=+0.173175147 container attach e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Bus STARTING
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Bus STARTING
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Serving on http://192.168.122.100:8765
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Serving on http://192.168.122.100:8765
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 18:31:57 np0005535838 systemd[1]: libpod-e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce.scope: Deactivated successfully.
Nov 25 18:31:57 np0005535838 podman[76996]: 2025-11-25 23:31:57.864688602 +0000 UTC m=+0.023653868 container died e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:31:57 np0005535838 systemd[1]: var-lib-containers-storage-overlay-662330ff237cf6dda9ea9d42b56274a7f96301fbaf6207fa352ec6f794faa7dc-merged.mount: Deactivated successfully.
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Serving on https://192.168.122.100:7150
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Serving on https://192.168.122.100:7150
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Bus STARTED
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Bus STARTED
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Client ('192.168.122.100', 50912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Client ('192.168.122.100', 50912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 18:31:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 18:31:57 np0005535838 podman[76996]: 2025-11-25 23:31:57.900457945 +0000 UTC m=+0.059423191 container remove e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:31:57 np0005535838 systemd[1]: libpod-conmon-e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce.scope: Deactivated successfully.
Nov 25 18:31:57 np0005535838 podman[77011]: 2025-11-25 23:31:57.973236737 +0000 UTC m=+0.044434767 container create 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 18:31:57 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:31:58 np0005535838 systemd[1]: Started libpod-conmon-998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8.scope.
Nov 25 18:31:58 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b98067781e038edb030e3d4a4934358ac018e601f2573e05c3ed3906800b7b71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b98067781e038edb030e3d4a4934358ac018e601f2573e05c3ed3906800b7b71/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b98067781e038edb030e3d4a4934358ac018e601f2573e05c3ed3906800b7b71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:58 np0005535838 podman[77011]: 2025-11-25 23:31:57.949008557 +0000 UTC m=+0.020206567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:58 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:58 np0005535838 podman[77011]: 2025-11-25 23:31:58.05743947 +0000 UTC m=+0.128637480 container init 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 18:31:58 np0005535838 podman[77011]: 2025-11-25 23:31:58.062762044 +0000 UTC m=+0.133960064 container start 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 18:31:58 np0005535838 podman[77011]: 2025-11-25 23:31:58.068030177 +0000 UTC m=+0.139228167 container attach 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 18:31:58 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:31:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 25 18:31:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:58 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Set ssh ssh_user
Nov 25 18:31:58 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 25 18:31:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 25 18:31:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:58 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Set ssh ssh_config
Nov 25 18:31:58 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 25 18:31:58 np0005535838 ceph-mgr[75954]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 25 18:31:58 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 25 18:31:58 np0005535838 trusting_easley[77027]: ssh user set to ceph-admin. sudo will be used
Nov 25 18:31:58 np0005535838 systemd[1]: libpod-998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8.scope: Deactivated successfully.
Nov 25 18:31:58 np0005535838 podman[77055]: 2025-11-25 23:31:58.655661368 +0000 UTC m=+0.036532808 container died 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:31:58 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b98067781e038edb030e3d4a4934358ac018e601f2573e05c3ed3906800b7b71-merged.mount: Deactivated successfully.
Nov 25 18:31:58 np0005535838 podman[77055]: 2025-11-25 23:31:58.705684309 +0000 UTC m=+0.086555759 container remove 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:31:58 np0005535838 systemd[1]: libpod-conmon-998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8.scope: Deactivated successfully.
Nov 25 18:31:58 np0005535838 podman[77070]: 2025-11-25 23:31:58.802044108 +0000 UTC m=+0.056619584 container create e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:31:58 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.gwqfsl(active, since 2s)
Nov 25 18:31:58 np0005535838 systemd[1]: Started libpod-conmon-e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892.scope.
Nov 25 18:31:58 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:58 np0005535838 podman[77070]: 2025-11-25 23:31:58.782090114 +0000 UTC m=+0.036665600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:58 np0005535838 podman[77070]: 2025-11-25 23:31:58.890831649 +0000 UTC m=+0.145407175 container init e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:31:58 np0005535838 podman[77070]: 2025-11-25 23:31:58.901512966 +0000 UTC m=+0.156088452 container start e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 18:31:58 np0005535838 podman[77070]: 2025-11-25 23:31:58.905788103 +0000 UTC m=+0.160363579 container attach e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:31:59 np0005535838 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Bus STARTING
Nov 25 18:31:59 np0005535838 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Serving on http://192.168.122.100:8765
Nov 25 18:31:59 np0005535838 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Serving on https://192.168.122.100:7150
Nov 25 18:31:59 np0005535838 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Bus STARTED
Nov 25 18:31:59 np0005535838 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Client ('192.168.122.100', 50912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 18:31:59 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:59 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:59 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:31:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 25 18:31:59 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:31:59 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 25 18:31:59 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 25 18:31:59 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Set ssh private key
Nov 25 18:31:59 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 25 18:31:59 np0005535838 systemd[1]: libpod-e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892.scope: Deactivated successfully.
Nov 25 18:31:59 np0005535838 podman[77070]: 2025-11-25 23:31:59.496136002 +0000 UTC m=+0.750711458 container died e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:31:59 np0005535838 systemd[1]: var-lib-containers-storage-overlay-323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6-merged.mount: Deactivated successfully.
Nov 25 18:31:59 np0005535838 podman[77070]: 2025-11-25 23:31:59.53624068 +0000 UTC m=+0.790816126 container remove e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:31:59 np0005535838 systemd[1]: libpod-conmon-e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892.scope: Deactivated successfully.
Nov 25 18:31:59 np0005535838 podman[77122]: 2025-11-25 23:31:59.629688041 +0000 UTC m=+0.059435140 container create 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 18:31:59 np0005535838 systemd[1]: Started libpod-conmon-8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c.scope.
Nov 25 18:31:59 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:31:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:31:59 np0005535838 podman[77122]: 2025-11-25 23:31:59.612384674 +0000 UTC m=+0.042131793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:31:59 np0005535838 podman[77122]: 2025-11-25 23:31:59.715674695 +0000 UTC m=+0.145421834 container init 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:31:59 np0005535838 podman[77122]: 2025-11-25 23:31:59.727133107 +0000 UTC m=+0.156880256 container start 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 18:31:59 np0005535838 podman[77122]: 2025-11-25 23:31:59.731933996 +0000 UTC m=+0.161681125 container attach 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:31:59 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:32:00 np0005535838 ceph-mon[75654]: Set ssh ssh_user
Nov 25 18:32:00 np0005535838 ceph-mon[75654]: Set ssh ssh_config
Nov 25 18:32:00 np0005535838 ceph-mon[75654]: ssh user set to ceph-admin. sudo will be used
Nov 25 18:32:00 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019918601 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:00 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:32:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 25 18:32:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:00 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 25 18:32:00 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 25 18:32:00 np0005535838 systemd[1]: libpod-8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c.scope: Deactivated successfully.
Nov 25 18:32:00 np0005535838 conmon[77138]: conmon 8c62c1b85709ced067be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c.scope/container/memory.events
Nov 25 18:32:00 np0005535838 podman[77122]: 2025-11-25 23:32:00.304453215 +0000 UTC m=+0.734200324 container died 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:32:00 np0005535838 systemd[1]: var-lib-containers-storage-overlay-6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8-merged.mount: Deactivated successfully.
Nov 25 18:32:00 np0005535838 podman[77122]: 2025-11-25 23:32:00.347447462 +0000 UTC m=+0.777194561 container remove 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:00 np0005535838 systemd[1]: libpod-conmon-8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c.scope: Deactivated successfully.
Nov 25 18:32:00 np0005535838 podman[77177]: 2025-11-25 23:32:00.406953991 +0000 UTC m=+0.038052577 container create a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:00 np0005535838 systemd[1]: Started libpod-conmon-a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d.scope.
Nov 25 18:32:00 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651002a038bad2846e8708414c55d7a4b9fb28be9c71571dc470a207340a3436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651002a038bad2846e8708414c55d7a4b9fb28be9c71571dc470a207340a3436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651002a038bad2846e8708414c55d7a4b9fb28be9c71571dc470a207340a3436/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:00 np0005535838 podman[77177]: 2025-11-25 23:32:00.38916553 +0000 UTC m=+0.020264106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:00 np0005535838 podman[77177]: 2025-11-25 23:32:00.488072996 +0000 UTC m=+0.119171612 container init a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:00 np0005535838 podman[77177]: 2025-11-25 23:32:00.500437652 +0000 UTC m=+0.131536248 container start a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 18:32:00 np0005535838 podman[77177]: 2025-11-25 23:32:00.504410947 +0000 UTC m=+0.135509533 container attach a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 18:32:01 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:32:01 np0005535838 stupefied_chaplygin[77193]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3THDlDiSF6PqONtAShZXCwFchR4rHTy5LPyheMSOgxaFzRe03pZG6osav83q8oZrqgKFplCm+9dnQIrFcmd7iLbDo24HWeXXUfzrsvTZgIVI8UN/6hnGxYC+5h3E5m4klPM9IcXV9KAVHDNunhWU2jIRJzASluLI0kbZt2MJ4nuPgoD03C5hBPXlTc0ndBuSmPCNz6GYl+sX0h95buuaLUKCm7G/cnKmNyXsqPZP5FdeXco80uvTdhIbEGqTRKqWph3FI18LoAWzWV0yPMthlcnNRy3ieGkVLO/IYzthfRxVEFtCMLFX12YH302IwsnQaxf8vStRKCrPT0z4DEPwU/5gxK+2W4pKStrPtaFR+zaMkyUUbzGPynrcln+k4szFjrcLUCK5aogyDdVEDxGP06YshuteHcUD+aiwo38MyEJXrVT8BhnI7TawvHvIpDNzD34yERn2J6wCMS41TAsAqTh7oE0P3kMw1DuCEQNMzJETLNmYLOD416M3lDsQz4vU= zuul@controller
Nov 25 18:32:01 np0005535838 systemd[1]: libpod-a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d.scope: Deactivated successfully.
Nov 25 18:32:01 np0005535838 podman[77177]: 2025-11-25 23:32:01.035504237 +0000 UTC m=+0.666602863 container died a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:32:01 np0005535838 systemd[1]: var-lib-containers-storage-overlay-651002a038bad2846e8708414c55d7a4b9fb28be9c71571dc470a207340a3436-merged.mount: Deactivated successfully.
Nov 25 18:32:01 np0005535838 podman[77177]: 2025-11-25 23:32:01.083884764 +0000 UTC m=+0.714983320 container remove a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:01 np0005535838 systemd[1]: libpod-conmon-a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d.scope: Deactivated successfully.
Nov 25 18:32:01 np0005535838 podman[77231]: 2025-11-25 23:32:01.147705543 +0000 UTC m=+0.041713969 container create 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:32:01 np0005535838 systemd[1]: Started libpod-conmon-0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53.scope.
Nov 25 18:32:01 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:01 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38e401c6be48f144acbe3d96348e5770b32d2ce483063e17922ca325ee76079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:01 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38e401c6be48f144acbe3d96348e5770b32d2ce483063e17922ca325ee76079/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:01 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38e401c6be48f144acbe3d96348e5770b32d2ce483063e17922ca325ee76079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:01 np0005535838 podman[77231]: 2025-11-25 23:32:01.21922844 +0000 UTC m=+0.113236896 container init 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:32:01 np0005535838 podman[77231]: 2025-11-25 23:32:01.129022643 +0000 UTC m=+0.023031079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:01 np0005535838 podman[77231]: 2025-11-25 23:32:01.227180973 +0000 UTC m=+0.121189389 container start 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:01 np0005535838 podman[77231]: 2025-11-25 23:32:01.231133469 +0000 UTC m=+0.125141935 container attach 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 18:32:01 np0005535838 ceph-mon[75654]: Set ssh ssh_identity_key
Nov 25 18:32:01 np0005535838 ceph-mon[75654]: Set ssh private key
Nov 25 18:32:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:01 np0005535838 ceph-mon[75654]: Set ssh ssh_identity_pub
Nov 25 18:32:01 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:32:01 np0005535838 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 18:32:01 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:32:01 np0005535838 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 18:32:01 np0005535838 systemd-logind[789]: New session 21 of user ceph-admin.
Nov 25 18:32:02 np0005535838 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 18:32:02 np0005535838 systemd[1]: Starting User Manager for UID 42477...
Nov 25 18:32:02 np0005535838 systemd[77281]: Queued start job for default target Main User Target.
Nov 25 18:32:02 np0005535838 systemd[77281]: Created slice User Application Slice.
Nov 25 18:32:02 np0005535838 systemd[77281]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 18:32:02 np0005535838 systemd[77281]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 18:32:02 np0005535838 systemd[77281]: Reached target Paths.
Nov 25 18:32:02 np0005535838 systemd[77281]: Reached target Timers.
Nov 25 18:32:02 np0005535838 systemd[77281]: Starting D-Bus User Message Bus Socket...
Nov 25 18:32:02 np0005535838 systemd[77281]: Starting Create User's Volatile Files and Directories...
Nov 25 18:32:02 np0005535838 systemd[77281]: Listening on D-Bus User Message Bus Socket.
Nov 25 18:32:02 np0005535838 systemd[77281]: Reached target Sockets.
Nov 25 18:32:02 np0005535838 systemd-logind[789]: New session 23 of user ceph-admin.
Nov 25 18:32:02 np0005535838 systemd[77281]: Finished Create User's Volatile Files and Directories.
Nov 25 18:32:02 np0005535838 systemd[77281]: Reached target Basic System.
Nov 25 18:32:02 np0005535838 systemd[77281]: Reached target Main User Target.
Nov 25 18:32:02 np0005535838 systemd[77281]: Startup finished in 165ms.
Nov 25 18:32:02 np0005535838 systemd[1]: Started User Manager for UID 42477.
Nov 25 18:32:02 np0005535838 systemd[1]: Started Session 21 of User ceph-admin.
Nov 25 18:32:02 np0005535838 systemd[1]: Started Session 23 of User ceph-admin.
Nov 25 18:32:02 np0005535838 systemd-logind[789]: New session 24 of user ceph-admin.
Nov 25 18:32:02 np0005535838 systemd[1]: Started Session 24 of User ceph-admin.
Nov 25 18:32:03 np0005535838 systemd-logind[789]: New session 25 of user ceph-admin.
Nov 25 18:32:03 np0005535838 systemd[1]: Started Session 25 of User ceph-admin.
Nov 25 18:32:03 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 25 18:32:03 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 25 18:32:03 np0005535838 systemd-logind[789]: New session 26 of user ceph-admin.
Nov 25 18:32:03 np0005535838 systemd[1]: Started Session 26 of User ceph-admin.
Nov 25 18:32:03 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:32:04 np0005535838 systemd-logind[789]: New session 27 of user ceph-admin.
Nov 25 18:32:04 np0005535838 systemd[1]: Started Session 27 of User ceph-admin.
Nov 25 18:32:04 np0005535838 ceph-mon[75654]: Deploying cephadm binary to compute-0
Nov 25 18:32:04 np0005535838 systemd-logind[789]: New session 28 of user ceph-admin.
Nov 25 18:32:04 np0005535838 systemd[1]: Started Session 28 of User ceph-admin.
Nov 25 18:32:05 np0005535838 systemd-logind[789]: New session 29 of user ceph-admin.
Nov 25 18:32:05 np0005535838 systemd[1]: Started Session 29 of User ceph-admin.
Nov 25 18:32:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052964 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:05 np0005535838 systemd-logind[789]: New session 30 of user ceph-admin.
Nov 25 18:32:05 np0005535838 systemd[1]: Started Session 30 of User ceph-admin.
Nov 25 18:32:05 np0005535838 systemd-logind[789]: New session 31 of user ceph-admin.
Nov 25 18:32:05 np0005535838 systemd[1]: Started Session 31 of User ceph-admin.
Nov 25 18:32:05 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:32:06 np0005535838 systemd-logind[789]: New session 32 of user ceph-admin.
Nov 25 18:32:06 np0005535838 systemd[1]: Started Session 32 of User ceph-admin.
Nov 25 18:32:06 np0005535838 systemd-logind[789]: New session 33 of user ceph-admin.
Nov 25 18:32:06 np0005535838 systemd[1]: Started Session 33 of User ceph-admin.
Nov 25 18:32:07 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 18:32:07 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:07 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Added host compute-0
Nov 25 18:32:07 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 18:32:07 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 18:32:07 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 18:32:07 np0005535838 fervent_carver[77247]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 18:32:07 np0005535838 systemd[1]: libpod-0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53.scope: Deactivated successfully.
Nov 25 18:32:07 np0005535838 podman[77231]: 2025-11-25 23:32:07.392822858 +0000 UTC m=+6.286831284 container died 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 18:32:07 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f38e401c6be48f144acbe3d96348e5770b32d2ce483063e17922ca325ee76079-merged.mount: Deactivated successfully.
Nov 25 18:32:07 np0005535838 podman[77231]: 2025-11-25 23:32:07.444221355 +0000 UTC m=+6.338229781 container remove 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 18:32:07 np0005535838 systemd[1]: libpod-conmon-0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53.scope: Deactivated successfully.
Nov 25 18:32:07 np0005535838 podman[77930]: 2025-11-25 23:32:07.526279484 +0000 UTC m=+0.053714621 container create 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:07 np0005535838 systemd[1]: Started libpod-conmon-76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4.scope.
Nov 25 18:32:07 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:07 np0005535838 podman[77930]: 2025-11-25 23:32:07.510774429 +0000 UTC m=+0.038209546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72bac06a2ac875cc0d531d62ec22f3b4a2f3de60183c3122b85f13012a806ca0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72bac06a2ac875cc0d531d62ec22f3b4a2f3de60183c3122b85f13012a806ca0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72bac06a2ac875cc0d531d62ec22f3b4a2f3de60183c3122b85f13012a806ca0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:07 np0005535838 podman[77930]: 2025-11-25 23:32:07.620998442 +0000 UTC m=+0.148433579 container init 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:07 np0005535838 podman[77930]: 2025-11-25 23:32:07.631041882 +0000 UTC m=+0.158476999 container start 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:07 np0005535838 podman[77930]: 2025-11-25 23:32:07.634250857 +0000 UTC m=+0.161685994 container attach 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:07 np0005535838 podman[78047]: 2025-11-25 23:32:07.897202454 +0000 UTC m=+0.043695021 container create b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:32:07 np0005535838 systemd[1]: Started libpod-conmon-b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead.scope.
Nov 25 18:32:07 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:07 np0005535838 podman[78047]: 2025-11-25 23:32:07.880361773 +0000 UTC m=+0.026854360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:07 np0005535838 podman[78047]: 2025-11-25 23:32:07.973752715 +0000 UTC m=+0.120245352 container init b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:07 np0005535838 podman[78047]: 2025-11-25 23:32:07.979394756 +0000 UTC m=+0.125887363 container start b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:32:07 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:32:07 np0005535838 podman[78047]: 2025-11-25 23:32:07.983372863 +0000 UTC m=+0.129865460 container attach b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 18:32:08 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:32:08 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 25 18:32:08 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:08 np0005535838 bold_torvalds[77989]: Scheduled mon update...
Nov 25 18:32:08 np0005535838 systemd[1]: libpod-76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4.scope: Deactivated successfully.
Nov 25 18:32:08 np0005535838 podman[77930]: 2025-11-25 23:32:08.162279427 +0000 UTC m=+0.689714584 container died 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 18:32:08 np0005535838 systemd[1]: var-lib-containers-storage-overlay-72bac06a2ac875cc0d531d62ec22f3b4a2f3de60183c3122b85f13012a806ca0-merged.mount: Deactivated successfully.
Nov 25 18:32:08 np0005535838 podman[77930]: 2025-11-25 23:32:08.220509678 +0000 UTC m=+0.747944825 container remove 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:32:08 np0005535838 systemd[1]: libpod-conmon-76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4.scope: Deactivated successfully.
Nov 25 18:32:08 np0005535838 flamboyant_mccarthy[78082]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 18:32:08 np0005535838 systemd[1]: libpod-b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead.scope: Deactivated successfully.
Nov 25 18:32:08 np0005535838 podman[78047]: 2025-11-25 23:32:08.271622877 +0000 UTC m=+0.418115454 container died b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:32:08 np0005535838 systemd[1]: var-lib-containers-storage-overlay-81e4dd50fedbcf6f2f874545e0d477b8ec2a0b2ca1146eb470c148886a26f988-merged.mount: Deactivated successfully.
Nov 25 18:32:08 np0005535838 podman[78047]: 2025-11-25 23:32:08.322454839 +0000 UTC m=+0.468947446 container remove b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:32:08 np0005535838 systemd[1]: libpod-conmon-b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead.scope: Deactivated successfully.
Nov 25 18:32:08 np0005535838 podman[78101]: 2025-11-25 23:32:08.342541418 +0000 UTC m=+0.091296547 container create 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: Added host compute-0
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:08 np0005535838 systemd[1]: Started libpod-conmon-9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d.scope.
Nov 25 18:32:08 np0005535838 podman[78101]: 2025-11-25 23:32:08.291783928 +0000 UTC m=+0.040539137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:08 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada961579c74cd1e1e65ab8aa77dbfa311f4d5409fc53c58344a9ab6f3f79c4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada961579c74cd1e1e65ab8aa77dbfa311f4d5409fc53c58344a9ab6f3f79c4e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada961579c74cd1e1e65ab8aa77dbfa311f4d5409fc53c58344a9ab6f3f79c4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:08 np0005535838 podman[78101]: 2025-11-25 23:32:08.428745827 +0000 UTC m=+0.177500956 container init 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 18:32:08 np0005535838 podman[78101]: 2025-11-25 23:32:08.434749829 +0000 UTC m=+0.183504948 container start 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:32:08 np0005535838 podman[78101]: 2025-11-25 23:32:08.437947485 +0000 UTC m=+0.186702604 container attach 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:09 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:32:09 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 25 18:32:09 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 25 18:32:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 18:32:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:09 np0005535838 unruffled_kapitsa[78128]: Scheduled mgr update...
Nov 25 18:32:09 np0005535838 systemd[1]: libpod-9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d.scope: Deactivated successfully.
Nov 25 18:32:09 np0005535838 podman[78101]: 2025-11-25 23:32:09.061547715 +0000 UTC m=+0.810302854 container died 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:09 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ada961579c74cd1e1e65ab8aa77dbfa311f4d5409fc53c58344a9ab6f3f79c4e-merged.mount: Deactivated successfully.
Nov 25 18:32:09 np0005535838 podman[78101]: 2025-11-25 23:32:09.115604414 +0000 UTC m=+0.864359573 container remove 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:32:09 np0005535838 systemd[1]: libpod-conmon-9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d.scope: Deactivated successfully.
Nov 25 18:32:09 np0005535838 podman[78380]: 2025-11-25 23:32:09.180435431 +0000 UTC m=+0.041113512 container create 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:09 np0005535838 systemd[1]: Started libpod-conmon-33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d.scope.
Nov 25 18:32:09 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:09 np0005535838 podman[78380]: 2025-11-25 23:32:09.161565306 +0000 UTC m=+0.022243357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f21df22da39ee1569737662165a6ab090a7a9e15ea4c85fc7c0ab8b2be3dc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f21df22da39ee1569737662165a6ab090a7a9e15ea4c85fc7c0ab8b2be3dc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f21df22da39ee1569737662165a6ab090a7a9e15ea4c85fc7c0ab8b2be3dc3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:09 np0005535838 podman[78380]: 2025-11-25 23:32:09.278700624 +0000 UTC m=+0.139378685 container init 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 18:32:09 np0005535838 podman[78380]: 2025-11-25 23:32:09.289403762 +0000 UTC m=+0.150081793 container start 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:32:09 np0005535838 podman[78380]: 2025-11-25 23:32:09.292652108 +0000 UTC m=+0.153330179 container attach 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:32:09 np0005535838 ceph-mon[75654]: Saving service mon spec with placement count:5
Nov 25 18:32:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:09 np0005535838 podman[78499]: 2025-11-25 23:32:09.794844496 +0000 UTC m=+0.084759582 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 18:32:09 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:32:09 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Saving service crash spec with placement *
Nov 25 18:32:09 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 25 18:32:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 18:32:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:09 np0005535838 fervent_chatterjee[78404]: Scheduled crash update...
Nov 25 18:32:09 np0005535838 systemd[1]: libpod-33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d.scope: Deactivated successfully.
Nov 25 18:32:09 np0005535838 conmon[78404]: conmon 33638c6bed675d27bdc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d.scope/container/memory.events
Nov 25 18:32:09 np0005535838 podman[78380]: 2025-11-25 23:32:09.824147662 +0000 UTC m=+0.684825743 container died 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:09 np0005535838 systemd[1]: var-lib-containers-storage-overlay-32f21df22da39ee1569737662165a6ab090a7a9e15ea4c85fc7c0ab8b2be3dc3-merged.mount: Deactivated successfully.
Nov 25 18:32:09 np0005535838 podman[78380]: 2025-11-25 23:32:09.882713511 +0000 UTC m=+0.743391592 container remove 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 18:32:09 np0005535838 systemd[1]: libpod-conmon-33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d.scope: Deactivated successfully.
Nov 25 18:32:09 np0005535838 podman[78535]: 2025-11-25 23:32:09.957842264 +0000 UTC m=+0.053341740 container create eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:32:09 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:32:10 np0005535838 systemd[1]: Started libpod-conmon-eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b.scope.
Nov 25 18:32:10 np0005535838 podman[78535]: 2025-11-25 23:32:09.93156761 +0000 UTC m=+0.027067116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:10 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707c05e1e4299089a0a9221804fb86cc4b77b08b687e65fb08231eeba73c048b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707c05e1e4299089a0a9221804fb86cc4b77b08b687e65fb08231eeba73c048b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707c05e1e4299089a0a9221804fb86cc4b77b08b687e65fb08231eeba73c048b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:10 np0005535838 podman[78535]: 2025-11-25 23:32:10.066712312 +0000 UTC m=+0.162211818 container init eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:10 np0005535838 podman[78535]: 2025-11-25 23:32:10.078112617 +0000 UTC m=+0.173612113 container start eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:10 np0005535838 podman[78535]: 2025-11-25 23:32:10.081900799 +0000 UTC m=+0.177400285 container attach eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 18:32:10 np0005535838 podman[78499]: 2025-11-25 23:32:10.124528012 +0000 UTC m=+0.414443058 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:10 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:10 np0005535838 ceph-mon[75654]: Saving service mgr spec with placement count:2
Nov 25 18:32:10 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:10 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 25 18:32:10 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3121850749' entity='client.admin' 
Nov 25 18:32:10 np0005535838 systemd[1]: libpod-eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b.scope: Deactivated successfully.
Nov 25 18:32:10 np0005535838 podman[78709]: 2025-11-25 23:32:10.763976197 +0000 UTC m=+0.033237421 container died eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 18:32:10 np0005535838 systemd[1]: var-lib-containers-storage-overlay-707c05e1e4299089a0a9221804fb86cc4b77b08b687e65fb08231eeba73c048b-merged.mount: Deactivated successfully.
Nov 25 18:32:10 np0005535838 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78736 (sysctl)
Nov 25 18:32:10 np0005535838 podman[78709]: 2025-11-25 23:32:10.835956996 +0000 UTC m=+0.105218150 container remove eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 18:32:10 np0005535838 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 25 18:32:10 np0005535838 systemd[1]: libpod-conmon-eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b.scope: Deactivated successfully.
Nov 25 18:32:10 np0005535838 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 25 18:32:10 np0005535838 podman[78739]: 2025-11-25 23:32:10.91856877 +0000 UTC m=+0.054679916 container create 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:10 np0005535838 systemd[1]: Started libpod-conmon-659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b.scope.
Nov 25 18:32:10 np0005535838 podman[78739]: 2025-11-25 23:32:10.892513691 +0000 UTC m=+0.028624857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:10 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:10 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3f56440d70701ba28ee65a0327e0ca7941e1bb79107d18a15daf4c8f3b7a27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3f56440d70701ba28ee65a0327e0ca7941e1bb79107d18a15daf4c8f3b7a27/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3f56440d70701ba28ee65a0327e0ca7941e1bb79107d18a15daf4c8f3b7a27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:11 np0005535838 podman[78739]: 2025-11-25 23:32:11.016646798 +0000 UTC m=+0.152757994 container init 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:32:11 np0005535838 podman[78739]: 2025-11-25 23:32:11.028547687 +0000 UTC m=+0.164658843 container start 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 18:32:11 np0005535838 podman[78739]: 2025-11-25 23:32:11.032868202 +0000 UTC m=+0.168979358 container attach 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:32:11 np0005535838 ceph-mon[75654]: Saving service crash spec with placement *
Nov 25 18:32:11 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3121850749' entity='client.admin' 
Nov 25 18:32:11 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:32:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 25 18:32:11 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:11 np0005535838 systemd[1]: libpod-659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b.scope: Deactivated successfully.
Nov 25 18:32:11 np0005535838 podman[78739]: 2025-11-25 23:32:11.598747976 +0000 UTC m=+0.734859092 container died 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:32:11 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8b3f56440d70701ba28ee65a0327e0ca7941e1bb79107d18a15daf4c8f3b7a27-merged.mount: Deactivated successfully.
Nov 25 18:32:11 np0005535838 podman[78739]: 2025-11-25 23:32:11.646425455 +0000 UTC m=+0.782536571 container remove 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:11 np0005535838 systemd[1]: libpod-conmon-659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b.scope: Deactivated successfully.
Nov 25 18:32:11 np0005535838 podman[78913]: 2025-11-25 23:32:11.728633947 +0000 UTC m=+0.052129768 container create f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:11 np0005535838 systemd[1]: Started libpod-conmon-f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782.scope.
Nov 25 18:32:11 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:11 np0005535838 podman[78913]: 2025-11-25 23:32:11.703127424 +0000 UTC m=+0.026623245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2e8b26e91297aa7e5828858ec44eb5ad9bcd92f4a1b8bfdaa3bde117d81ff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2e8b26e91297aa7e5828858ec44eb5ad9bcd92f4a1b8bfdaa3bde117d81ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2e8b26e91297aa7e5828858ec44eb5ad9bcd92f4a1b8bfdaa3bde117d81ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:11 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:11 np0005535838 podman[78913]: 2025-11-25 23:32:11.815342131 +0000 UTC m=+0.138837972 container init f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:32:11 np0005535838 podman[78913]: 2025-11-25 23:32:11.822144543 +0000 UTC m=+0.145640354 container start f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:32:11 np0005535838 podman[78913]: 2025-11-25 23:32:11.825918715 +0000 UTC m=+0.149414526 container attach f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:11 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:32:12 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:32:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 18:32:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:12 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Added label _admin to host compute-0
Nov 25 18:32:12 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 25 18:32:12 np0005535838 keen_lewin[78946]: Added label _admin to host compute-0
Nov 25 18:32:12 np0005535838 systemd[1]: libpod-f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782.scope: Deactivated successfully.
Nov 25 18:32:12 np0005535838 podman[78913]: 2025-11-25 23:32:12.398350285 +0000 UTC m=+0.721846106 container died f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 18:32:12 np0005535838 systemd[1]: var-lib-containers-storage-overlay-14c2e8b26e91297aa7e5828858ec44eb5ad9bcd92f4a1b8bfdaa3bde117d81ff-merged.mount: Deactivated successfully.
Nov 25 18:32:12 np0005535838 podman[78913]: 2025-11-25 23:32:12.444550103 +0000 UTC m=+0.768045924 container remove f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:12 np0005535838 systemd[1]: libpod-conmon-f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782.scope: Deactivated successfully.
Nov 25 18:32:12 np0005535838 podman[79109]: 2025-11-25 23:32:12.493303338 +0000 UTC m=+0.093013343 container create bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:12 np0005535838 podman[79134]: 2025-11-25 23:32:12.535047127 +0000 UTC m=+0.068578158 container create cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:32:12 np0005535838 systemd[1]: Started libpod-conmon-bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db.scope.
Nov 25 18:32:12 np0005535838 podman[79109]: 2025-11-25 23:32:12.464950109 +0000 UTC m=+0.064660104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:12 np0005535838 systemd[1]: Started libpod-conmon-cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d.scope.
Nov 25 18:32:12 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:12 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:12 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:12 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:12 np0005535838 ceph-mon[75654]: Added label _admin to host compute-0
Nov 25 18:32:12 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf7be4cdb4210f473a1c225aca7cda7e771261aa4ccba49bab5a00ee57347bd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf7be4cdb4210f473a1c225aca7cda7e771261aa4ccba49bab5a00ee57347bd1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:12 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf7be4cdb4210f473a1c225aca7cda7e771261aa4ccba49bab5a00ee57347bd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:12 np0005535838 podman[79109]: 2025-11-25 23:32:12.600935463 +0000 UTC m=+0.200645488 container init bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:12 np0005535838 podman[79134]: 2025-11-25 23:32:12.511443055 +0000 UTC m=+0.044974176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:12 np0005535838 podman[79134]: 2025-11-25 23:32:12.616155601 +0000 UTC m=+0.149686682 container init cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:32:12 np0005535838 podman[79109]: 2025-11-25 23:32:12.620036005 +0000 UTC m=+0.219745990 container start bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:12 np0005535838 podman[79109]: 2025-11-25 23:32:12.624622188 +0000 UTC m=+0.224332193 container attach bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 18:32:12 np0005535838 vibrant_torvalds[79152]: 167 167
Nov 25 18:32:12 np0005535838 systemd[1]: libpod-bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db.scope: Deactivated successfully.
Nov 25 18:32:12 np0005535838 podman[79109]: 2025-11-25 23:32:12.629653633 +0000 UTC m=+0.229363628 container died bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 18:32:12 np0005535838 podman[79134]: 2025-11-25 23:32:12.630464194 +0000 UTC m=+0.163995215 container start cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 18:32:12 np0005535838 podman[79134]: 2025-11-25 23:32:12.641720266 +0000 UTC m=+0.175251337 container attach cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 18:32:12 np0005535838 systemd[1]: var-lib-containers-storage-overlay-065e2d521e13bd5b3beb8187d8f66c5452a45527020845a22b7542444ca49d8a-merged.mount: Deactivated successfully.
Nov 25 18:32:12 np0005535838 podman[79109]: 2025-11-25 23:32:12.680610368 +0000 UTC m=+0.280320373 container remove bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:32:12 np0005535838 systemd[1]: libpod-conmon-bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db.scope: Deactivated successfully.
Nov 25 18:32:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 25 18:32:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1524641099' entity='client.admin' 
Nov 25 18:32:13 np0005535838 systemd[1]: libpod-cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d.scope: Deactivated successfully.
Nov 25 18:32:13 np0005535838 podman[79197]: 2025-11-25 23:32:13.229936308 +0000 UTC m=+0.029444930 container died cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:32:13 np0005535838 systemd[1]: var-lib-containers-storage-overlay-bf7be4cdb4210f473a1c225aca7cda7e771261aa4ccba49bab5a00ee57347bd1-merged.mount: Deactivated successfully.
Nov 25 18:32:13 np0005535838 podman[79197]: 2025-11-25 23:32:13.281557002 +0000 UTC m=+0.081065684 container remove cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:13 np0005535838 systemd[1]: libpod-conmon-cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d.scope: Deactivated successfully.
Nov 25 18:32:13 np0005535838 podman[79211]: 2025-11-25 23:32:13.376666731 +0000 UTC m=+0.061013336 container create 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 18:32:13 np0005535838 systemd[1]: Started libpod-conmon-1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d.scope.
Nov 25 18:32:13 np0005535838 podman[79211]: 2025-11-25 23:32:13.347461818 +0000 UTC m=+0.031808473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:13 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b807784ebe330de1fca1bf4ed3a61b29cd54376dfb4298eabf1004dd361b78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b807784ebe330de1fca1bf4ed3a61b29cd54376dfb4298eabf1004dd361b78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b807784ebe330de1fca1bf4ed3a61b29cd54376dfb4298eabf1004dd361b78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:13 np0005535838 podman[79211]: 2025-11-25 23:32:13.471829651 +0000 UTC m=+0.156176266 container init 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:13 np0005535838 podman[79211]: 2025-11-25 23:32:13.482769764 +0000 UTC m=+0.167116369 container start 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:32:13 np0005535838 podman[79211]: 2025-11-25 23:32:13.487451599 +0000 UTC m=+0.171798174 container attach 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:32:13 np0005535838 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 18:32:14 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 25 18:32:14 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3552777047' entity='client.admin' 
Nov 25 18:32:14 np0005535838 cool_grothendieck[79227]: set mgr/dashboard/cluster/status
Nov 25 18:32:14 np0005535838 systemd[1]: libpod-1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d.scope: Deactivated successfully.
Nov 25 18:32:14 np0005535838 podman[79211]: 2025-11-25 23:32:14.168557172 +0000 UTC m=+0.852903757 container died 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:14 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1524641099' entity='client.admin' 
Nov 25 18:32:14 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3552777047' entity='client.admin' 
Nov 25 18:32:14 np0005535838 systemd[1]: var-lib-containers-storage-overlay-09b807784ebe330de1fca1bf4ed3a61b29cd54376dfb4298eabf1004dd361b78-merged.mount: Deactivated successfully.
Nov 25 18:32:14 np0005535838 podman[79211]: 2025-11-25 23:32:14.225274101 +0000 UTC m=+0.909620666 container remove 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:14 np0005535838 systemd[1]: libpod-conmon-1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d.scope: Deactivated successfully.
Nov 25 18:32:14 np0005535838 podman[79273]: 2025-11-25 23:32:14.440075358 +0000 UTC m=+0.065511967 container create 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:14 np0005535838 systemd[1]: Started libpod-conmon-3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be.scope.
Nov 25 18:32:14 np0005535838 podman[79273]: 2025-11-25 23:32:14.413077934 +0000 UTC m=+0.038514583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:14 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:14 np0005535838 podman[79273]: 2025-11-25 23:32:14.532589607 +0000 UTC m=+0.158026176 container init 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:32:14 np0005535838 podman[79273]: 2025-11-25 23:32:14.548057282 +0000 UTC m=+0.173493891 container start 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:32:14 np0005535838 podman[79273]: 2025-11-25 23:32:14.552251334 +0000 UTC m=+0.177687923 container attach 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 18:32:14 np0005535838 python3[79320]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:32:14 np0005535838 podman[79321]: 2025-11-25 23:32:14.860535605 +0000 UTC m=+0.094216616 container create 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:32:14 np0005535838 systemd[1]: Started libpod-conmon-9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9.scope.
Nov 25 18:32:14 np0005535838 podman[79321]: 2025-11-25 23:32:14.831366273 +0000 UTC m=+0.065047294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:14 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d16fa754e3f7b4ff9c93bd2d3181be7eecac2b14e238652380e3f593ec0de8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d16fa754e3f7b4ff9c93bd2d3181be7eecac2b14e238652380e3f593ec0de8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:14 np0005535838 podman[79321]: 2025-11-25 23:32:14.95586505 +0000 UTC m=+0.189546131 container init 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:32:14 np0005535838 podman[79321]: 2025-11-25 23:32:14.963419673 +0000 UTC m=+0.197100684 container start 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:14 np0005535838 podman[79321]: 2025-11-25 23:32:14.967957144 +0000 UTC m=+0.201638195 container attach 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 25 18:32:15 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3228972506' entity='client.admin' 
Nov 25 18:32:15 np0005535838 systemd[1]: libpod-9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9.scope: Deactivated successfully.
Nov 25 18:32:15 np0005535838 conmon[79337]: conmon 9a4ecace5f080e974217 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9.scope/container/memory.events
Nov 25 18:32:15 np0005535838 podman[79321]: 2025-11-25 23:32:15.565391383 +0000 UTC m=+0.799072444 container died 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:32:15 np0005535838 systemd[1]: var-lib-containers-storage-overlay-9d16fa754e3f7b4ff9c93bd2d3181be7eecac2b14e238652380e3f593ec0de8d-merged.mount: Deactivated successfully.
Nov 25 18:32:15 np0005535838 podman[79321]: 2025-11-25 23:32:15.618856817 +0000 UTC m=+0.852537788 container remove 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 18:32:15 np0005535838 systemd[1]: libpod-conmon-9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9.scope: Deactivated successfully.
Nov 25 18:32:15 np0005535838 ceph-mgr[75954]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 25 18:32:15 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:15 np0005535838 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]: [
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:    {
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        "available": false,
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        "ceph_device": false,
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        "lsm_data": {},
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        "lvs": [],
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        "path": "/dev/sr0",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        "rejected_reasons": [
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "Insufficient space (<5GB)",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "Has a FileSystem"
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        ],
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        "sys_api": {
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "actuators": null,
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "device_nodes": "sr0",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "devname": "sr0",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "human_readable_size": "482.00 KB",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "id_bus": "ata",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "model": "QEMU DVD-ROM",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "nr_requests": "2",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "parent": "/dev/sr0",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "partitions": {},
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "path": "/dev/sr0",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "removable": "1",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "rev": "2.5+",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "ro": "0",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "rotational": "1",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "sas_address": "",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "sas_device_handle": "",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "scheduler_mode": "mq-deadline",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "sectors": 0,
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "sectorsize": "2048",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "size": 493568.0,
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "support_discard": "2048",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "type": "disk",
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:            "vendor": "QEMU"
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:        }
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]:    }
Nov 25 18:32:16 np0005535838 elastic_hermann[79290]: ]
Nov 25 18:32:16 np0005535838 systemd[1]: libpod-3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be.scope: Deactivated successfully.
Nov 25 18:32:16 np0005535838 podman[79273]: 2025-11-25 23:32:16.09181519 +0000 UTC m=+1.717251769 container died 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 18:32:16 np0005535838 systemd[1]: libpod-3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be.scope: Consumed 1.587s CPU time.
Nov 25 18:32:16 np0005535838 systemd[1]: var-lib-containers-storage-overlay-e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c-merged.mount: Deactivated successfully.
Nov 25 18:32:16 np0005535838 podman[79273]: 2025-11-25 23:32:16.167583712 +0000 UTC m=+1.793020321 container remove 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 18:32:16 np0005535838 systemd[1]: libpod-conmon-3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be.scope: Deactivated successfully.
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:32:16 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 18:32:16 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3228972506' entity='client.admin' 
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:32:16 np0005535838 ceph-mon[75654]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 18:32:16 np0005535838 ansible-async_wrapper.py[81542]: Invoked with j144314215060 30 /home/zuul/.ansible/tmp/ansible-tmp-1764113536.0793626-36633-250666746881010/AnsiballZ_command.py _
Nov 25 18:32:16 np0005535838 ansible-async_wrapper.py[81593]: Starting module and watcher
Nov 25 18:32:16 np0005535838 ansible-async_wrapper.py[81593]: Start watching 81594 (30)
Nov 25 18:32:16 np0005535838 ansible-async_wrapper.py[81594]: Start module (81594)
Nov 25 18:32:16 np0005535838 ansible-async_wrapper.py[81542]: Return async_wrapper task started.
Nov 25 18:32:16 np0005535838 python3[81596]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:32:17 np0005535838 podman[81655]: 2025-11-25 23:32:17.051644052 +0000 UTC m=+0.056318320 container create 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 18:32:17 np0005535838 systemd[1]: Started libpod-conmon-7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819.scope.
Nov 25 18:32:17 np0005535838 podman[81655]: 2025-11-25 23:32:17.022291515 +0000 UTC m=+0.026965883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:17 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:17 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2873d38b015dbe2f35d30ecae2014be736c8df2573dc29b1e6459dcf3baea5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:17 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2873d38b015dbe2f35d30ecae2014be736c8df2573dc29b1e6459dcf3baea5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:17 np0005535838 podman[81655]: 2025-11-25 23:32:17.148946059 +0000 UTC m=+0.153620387 container init 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:32:17 np0005535838 podman[81655]: 2025-11-25 23:32:17.160690544 +0000 UTC m=+0.165364812 container start 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 18:32:17 np0005535838 podman[81655]: 2025-11-25 23:32:17.163771976 +0000 UTC m=+0.168446244 container attach 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:32:17 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf
Nov 25 18:32:17 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf
Nov 25 18:32:17 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 18:32:17 np0005535838 determined_hertz[81704]: 
Nov 25 18:32:17 np0005535838 determined_hertz[81704]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 18:32:17 np0005535838 systemd[1]: libpod-7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819.scope: Deactivated successfully.
Nov 25 18:32:17 np0005535838 podman[81655]: 2025-11-25 23:32:17.673721622 +0000 UTC m=+0.678395880 container died 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:17 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ec2873d38b015dbe2f35d30ecae2014be736c8df2573dc29b1e6459dcf3baea5-merged.mount: Deactivated successfully.
Nov 25 18:32:17 np0005535838 podman[81655]: 2025-11-25 23:32:17.723476245 +0000 UTC m=+0.728150553 container remove 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:17 np0005535838 systemd[1]: libpod-conmon-7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819.scope: Deactivated successfully.
Nov 25 18:32:17 np0005535838 ansible-async_wrapper.py[81594]: Module complete (81594)
Nov 25 18:32:17 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:18 np0005535838 python3[82216]: ansible-ansible.legacy.async_status Invoked with jid=j144314215060.81542 mode=status _async_dir=/root/.ansible_async
Nov 25 18:32:18 np0005535838 python3[82364]: ansible-ansible.legacy.async_status Invoked with jid=j144314215060.81542 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 18:32:18 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 18:32:18 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 18:32:19 np0005535838 ceph-mon[75654]: Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf
Nov 25 18:32:19 np0005535838 python3[82543]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 18:32:19 np0005535838 python3[82730]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:32:19 np0005535838 podman[82796]: 2025-11-25 23:32:19.588948206 +0000 UTC m=+0.021230140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:19 np0005535838 podman[82796]: 2025-11-25 23:32:19.958874489 +0000 UTC m=+0.391156363 container create 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:19 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:20 np0005535838 systemd[1]: Started libpod-conmon-6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9.scope.
Nov 25 18:32:20 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fc6f375395c171c2839986ba3dd460afe166443b7ace335b54c3240f241615/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fc6f375395c171c2839986ba3dd460afe166443b7ace335b54c3240f241615/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fc6f375395c171c2839986ba3dd460afe166443b7ace335b54c3240f241615/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:20 np0005535838 ceph-mon[75654]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 18:32:20 np0005535838 podman[82796]: 2025-11-25 23:32:20.066856142 +0000 UTC m=+0.499137996 container init 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:32:20 np0005535838 podman[82796]: 2025-11-25 23:32:20.075429793 +0000 UTC m=+0.507711677 container start 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:20 np0005535838 podman[82796]: 2025-11-25 23:32:20.079311686 +0000 UTC m=+0.511593630 container attach 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:20 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring
Nov 25 18:32:20 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring
Nov 25 18:32:20 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 18:32:20 np0005535838 nifty_montalcini[82860]: 
Nov 25 18:32:20 np0005535838 nifty_montalcini[82860]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 18:32:20 np0005535838 systemd[1]: libpod-6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9.scope: Deactivated successfully.
Nov 25 18:32:20 np0005535838 podman[82796]: 2025-11-25 23:32:20.641660626 +0000 UTC m=+1.073942480 container died 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:20 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a2fc6f375395c171c2839986ba3dd460afe166443b7ace335b54c3240f241615-merged.mount: Deactivated successfully.
Nov 25 18:32:20 np0005535838 podman[82796]: 2025-11-25 23:32:20.686138018 +0000 UTC m=+1.118419862 container remove 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:20 np0005535838 systemd[1]: libpod-conmon-6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9.scope: Deactivated successfully.
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring
Nov 25 18:32:21 np0005535838 python3[83216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:32:21 np0005535838 podman[83259]: 2025-11-25 23:32:21.253672296 +0000 UTC m=+0.051139060 container create 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:21 np0005535838 systemd[1]: Started libpod-conmon-42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b.scope.
Nov 25 18:32:21 np0005535838 podman[83259]: 2025-11-25 23:32:21.237053161 +0000 UTC m=+0.034519885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:21 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:21 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09abc300a67c9b3c1607750f4fb8b8ba4f75f8232944c489065ce391b42f9bf9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:21 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09abc300a67c9b3c1607750f4fb8b8ba4f75f8232944c489065ce391b42f9bf9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:21 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09abc300a67c9b3c1607750f4fb8b8ba4f75f8232944c489065ce391b42f9bf9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:21 np0005535838 podman[83259]: 2025-11-25 23:32:21.364742112 +0000 UTC m=+0.162208906 container init 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:21 np0005535838 podman[83259]: 2025-11-25 23:32:21.375222694 +0000 UTC m=+0.172689438 container start 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:32:21 np0005535838 podman[83259]: 2025-11-25 23:32:21.379540639 +0000 UTC m=+0.177007403 container attach 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:21 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev 01d37128-2f77-42ad-b9c7-ed25c454262f (Updating crash deployment (+1 -> 1))
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:32:21 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 25 18:32:21 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 25 18:32:21 np0005535838 ansible-async_wrapper.py[81593]: Done in kid B.
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 25 18:32:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3516919307' entity='client.admin' 
Nov 25 18:32:21 np0005535838 systemd[1]: libpod-42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b.scope: Deactivated successfully.
Nov 25 18:32:21 np0005535838 conmon[83311]: conmon 42c79a575f0f05476a33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b.scope/container/memory.events
Nov 25 18:32:21 np0005535838 podman[83259]: 2025-11-25 23:32:21.917053264 +0000 UTC m=+0.714520018 container died 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:32:21 np0005535838 systemd[1]: var-lib-containers-storage-overlay-09abc300a67c9b3c1607750f4fb8b8ba4f75f8232944c489065ce391b42f9bf9-merged.mount: Deactivated successfully.
Nov 25 18:32:21 np0005535838 podman[83259]: 2025-11-25 23:32:21.973424664 +0000 UTC m=+0.770891398 container remove 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:32:21 np0005535838 systemd[1]: libpod-conmon-42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b.scope: Deactivated successfully.
Nov 25 18:32:21 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:22 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:22 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:22 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:22 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 18:32:22 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 18:32:22 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3516919307' entity='client.admin' 
Nov 25 18:32:22 np0005535838 python3[83591]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:32:22 np0005535838 podman[83613]: 2025-11-25 23:32:22.410843686 +0000 UTC m=+0.067913601 container create 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:22 np0005535838 podman[83625]: 2025-11-25 23:32:22.451452525 +0000 UTC m=+0.064926302 container create 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:22 np0005535838 systemd[1]: Started libpod-conmon-59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794.scope.
Nov 25 18:32:22 np0005535838 podman[83613]: 2025-11-25 23:32:22.381378397 +0000 UTC m=+0.038448362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:22 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:22 np0005535838 systemd[1]: Started libpod-conmon-3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b.scope.
Nov 25 18:32:22 np0005535838 podman[83613]: 2025-11-25 23:32:22.506562001 +0000 UTC m=+0.163631906 container init 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:22 np0005535838 podman[83613]: 2025-11-25 23:32:22.51322164 +0000 UTC m=+0.170291555 container start 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:32:22 np0005535838 podman[83625]: 2025-11-25 23:32:22.423399443 +0000 UTC m=+0.036873270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:22 np0005535838 nifty_agnesi[83641]: 167 167
Nov 25 18:32:22 np0005535838 podman[83613]: 2025-11-25 23:32:22.519286182 +0000 UTC m=+0.176356147 container attach 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:32:22 np0005535838 podman[83613]: 2025-11-25 23:32:22.519692453 +0000 UTC m=+0.176762378 container died 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:22 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:22 np0005535838 systemd[1]: libpod-59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794.scope: Deactivated successfully.
Nov 25 18:32:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37ef6e4943db87d2472a137035eabb48dea597a1ceee342439cee8f846145d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37ef6e4943db87d2472a137035eabb48dea597a1ceee342439cee8f846145d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37ef6e4943db87d2472a137035eabb48dea597a1ceee342439cee8f846145d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:22 np0005535838 podman[83625]: 2025-11-25 23:32:22.551119375 +0000 UTC m=+0.164593152 container init 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:22 np0005535838 systemd[1]: var-lib-containers-storage-overlay-4e1c47754ab309e550aef2018ea675b819e60f29a8fa2d7f91034947ab575329-merged.mount: Deactivated successfully.
Nov 25 18:32:22 np0005535838 podman[83625]: 2025-11-25 23:32:22.563294991 +0000 UTC m=+0.176768778 container start 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 18:32:22 np0005535838 podman[83625]: 2025-11-25 23:32:22.567523894 +0000 UTC m=+0.180997691 container attach 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:22 np0005535838 podman[83613]: 2025-11-25 23:32:22.589245167 +0000 UTC m=+0.246315092 container remove 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 18:32:22 np0005535838 systemd[1]: libpod-conmon-59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794.scope: Deactivated successfully.
Nov 25 18:32:22 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:22 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:22 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:23 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: Deploying daemon crash.compute-0 on compute-0
Nov 25 18:32:23 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:23 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/113567869' entity='client.admin' 
Nov 25 18:32:23 np0005535838 podman[83625]: 2025-11-25 23:32:23.141727332 +0000 UTC m=+0.755201109 container died 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:23 np0005535838 systemd[1]: libpod-3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b.scope: Deactivated successfully.
Nov 25 18:32:23 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a37ef6e4943db87d2472a137035eabb48dea597a1ceee342439cee8f846145d0-merged.mount: Deactivated successfully.
Nov 25 18:32:23 np0005535838 podman[83625]: 2025-11-25 23:32:23.29314279 +0000 UTC m=+0.906616537 container remove 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:32:23 np0005535838 systemd[1]: Starting Ceph crash.compute-0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:32:23 np0005535838 systemd[1]: libpod-conmon-3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b.scope: Deactivated successfully.
Nov 25 18:32:23 np0005535838 podman[83850]: 2025-11-25 23:32:23.632872393 +0000 UTC m=+0.075466903 container create 42d7403704ba2cd3e1da4f13821251ee83623d7ba3755973192c5a74b5ccbae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:23 np0005535838 podman[83850]: 2025-11-25 23:32:23.595863242 +0000 UTC m=+0.038457832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:23 np0005535838 python3[83863]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:32:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828cfc035f3321e5f3babfcc650e4dbb0913a988a56b75f575c0c560162df02a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828cfc035f3321e5f3babfcc650e4dbb0913a988a56b75f575c0c560162df02a/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828cfc035f3321e5f3babfcc650e4dbb0913a988a56b75f575c0c560162df02a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828cfc035f3321e5f3babfcc650e4dbb0913a988a56b75f575c0c560162df02a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:23 np0005535838 podman[83850]: 2025-11-25 23:32:23.740543919 +0000 UTC m=+0.183138459 container init 42d7403704ba2cd3e1da4f13821251ee83623d7ba3755973192c5a74b5ccbae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 18:32:23 np0005535838 podman[83850]: 2025-11-25 23:32:23.750742083 +0000 UTC m=+0.193336593 container start 42d7403704ba2cd3e1da4f13821251ee83623d7ba3755973192c5a74b5ccbae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:23 np0005535838 bash[83850]: 42d7403704ba2cd3e1da4f13821251ee83623d7ba3755973192c5a74b5ccbae0
Nov 25 18:32:23 np0005535838 systemd[1]: Started Ceph crash.compute-0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:32:23 np0005535838 podman[83873]: 2025-11-25 23:32:23.799762066 +0000 UTC m=+0.072032662 container create f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:23 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev 01d37128-2f77-42ad-b9c7-ed25c454262f (Updating crash deployment (+1 -> 1))
Nov 25 18:32:23 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event 01d37128-2f77-42ad-b9c7-ed25c454262f (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:23 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev a2283866-e30d-4884-a246-0fc4ab05bcc8 does not exist
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:23 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev 5e4e79e8-e0bf-4510-9e37-72aa1cc6fb23 (Updating mgr deployment (+1 -> 2))
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:32:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:32:23 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.cckgxa on compute-0
Nov 25 18:32:23 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.cckgxa on compute-0
Nov 25 18:32:23 np0005535838 systemd[1]: Started libpod-conmon-f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33.scope.
Nov 25 18:32:23 np0005535838 podman[83873]: 2025-11-25 23:32:23.770546193 +0000 UTC m=+0.042816819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:23 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b09875b329b4241c77e1cb3f020a304ae9cb9abc6ba0c6d9fc1e350f650592/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b09875b329b4241c77e1cb3f020a304ae9cb9abc6ba0c6d9fc1e350f650592/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b09875b329b4241c77e1cb3f020a304ae9cb9abc6ba0c6d9fc1e350f650592/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:23 np0005535838 podman[83873]: 2025-11-25 23:32:23.928842835 +0000 UTC m=+0.201113481 container init f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:23 np0005535838 podman[83873]: 2025-11-25 23:32:23.939408288 +0000 UTC m=+0.211678894 container start f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 18:32:23 np0005535838 podman[83873]: 2025-11-25 23:32:23.945066879 +0000 UTC m=+0.217337485 container attach f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:32:23 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/113567869' entity='client.admin' 
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 18:32:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.224+0000 7f5ee9164640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 18:32:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.224+0000 7f5ee9164640 -1 AuthRegistry(0x7f5ee4067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 18:32:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.225+0000 7f5ee9164640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 18:32:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.225+0000 7f5ee9164640 -1 AuthRegistry(0x7f5ee9163000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 18:32:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.227+0000 7f5ee2d76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 18:32:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.227+0000 7f5ee9164640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 25 18:32:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 25 18:32:24 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 25 18:32:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3921811036' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 18:32:24 np0005535838 podman[84070]: 2025-11-25 23:32:24.665495105 +0000 UTC m=+0.052700383 container create 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:24 np0005535838 systemd[1]: Started libpod-conmon-7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace.scope.
Nov 25 18:32:24 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:24 np0005535838 podman[84070]: 2025-11-25 23:32:24.642226991 +0000 UTC m=+0.029432299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:24 np0005535838 podman[84070]: 2025-11-25 23:32:24.75151245 +0000 UTC m=+0.138717748 container init 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:32:24 np0005535838 podman[84070]: 2025-11-25 23:32:24.761291092 +0000 UTC m=+0.148496390 container start 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:24 np0005535838 podman[84070]: 2025-11-25 23:32:24.764817367 +0000 UTC m=+0.152022675 container attach 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:32:24 np0005535838 nice_beaver[84087]: 167 167
Nov 25 18:32:24 np0005535838 systemd[1]: libpod-7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace.scope: Deactivated successfully.
Nov 25 18:32:24 np0005535838 conmon[84087]: conmon 7f667a74639b8f1d41d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace.scope/container/memory.events
Nov 25 18:32:24 np0005535838 podman[84070]: 2025-11-25 23:32:24.76903764 +0000 UTC m=+0.156242938 container died 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:32:24 np0005535838 systemd[1]: var-lib-containers-storage-overlay-4b3a924fa9680c3adb12ead1f656292db3fbe344244cfa35e3758d713a2cf845-merged.mount: Deactivated successfully.
Nov 25 18:32:24 np0005535838 podman[84070]: 2025-11-25 23:32:24.814303643 +0000 UTC m=+0.201508931 container remove 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:32:24 np0005535838 systemd[1]: libpod-conmon-7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace.scope: Deactivated successfully.
Nov 25 18:32:24 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:24 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:24 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 25 18:32:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:32:25 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3921811036' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 18:32:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 25 18:32:25 np0005535838 hopeful_kepler[83891]: set require_min_compat_client to mimic
Nov 25 18:32:25 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 25 18:32:25 np0005535838 ceph-mon[75654]: Deploying daemon mgr.compute-0.cckgxa on compute-0
Nov 25 18:32:25 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3921811036' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 18:32:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:25 np0005535838 systemd[1]: libpod-f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33.scope: Deactivated successfully.
Nov 25 18:32:25 np0005535838 podman[84143]: 2025-11-25 23:32:25.216425129 +0000 UTC m=+0.028655509 container died f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 25 18:32:25 np0005535838 systemd[1]: var-lib-containers-storage-overlay-62b09875b329b4241c77e1cb3f020a304ae9cb9abc6ba0c6d9fc1e350f650592-merged.mount: Deactivated successfully.
Nov 25 18:32:25 np0005535838 podman[84143]: 2025-11-25 23:32:25.262451712 +0000 UTC m=+0.074682092 container remove f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:25 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:25 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:25 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:25 np0005535838 systemd[1]: libpod-conmon-f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33.scope: Deactivated successfully.
Nov 25 18:32:25 np0005535838 systemd[1]: Starting Ceph mgr.compute-0.cckgxa for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:32:25 np0005535838 podman[84273]: 2025-11-25 23:32:25.916806448 +0000 UTC m=+0.067222972 container create 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 18:32:25 np0005535838 python3[84260]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:32:25 np0005535838 podman[84273]: 2025-11-25 23:32:25.882371215 +0000 UTC m=+0.032787799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:25 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:25 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:25 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:25 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:25 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e/merged/var/lib/ceph/mgr/ceph-compute-0.cckgxa supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:26 np0005535838 podman[84285]: 2025-11-25 23:32:26.015557194 +0000 UTC m=+0.054175533 container create 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:32:26 np0005535838 podman[84273]: 2025-11-25 23:32:26.026584569 +0000 UTC m=+0.177001093 container init 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:32:26 np0005535838 podman[84273]: 2025-11-25 23:32:26.033870464 +0000 UTC m=+0.184286958 container start 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:32:26 np0005535838 bash[84273]: 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0
Nov 25 18:32:26 np0005535838 systemd[1]: Started Ceph mgr.compute-0.cckgxa for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: [progress INFO root] Writing back 1 completed events
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 18:32:26 np0005535838 systemd[1]: Started libpod-conmon-4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41.scope.
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:32:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a89dc3966f656b62328b9b016d5499907a8b1ed135fd91d7e0b0a68ec22165/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a89dc3966f656b62328b9b016d5499907a8b1ed135fd91d7e0b0a68ec22165/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a89dc3966f656b62328b9b016d5499907a8b1ed135fd91d7e0b0a68ec22165/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:26 np0005535838 podman[84285]: 2025-11-25 23:32:26.000204742 +0000 UTC m=+0.038823111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 ceph-mgr[84304]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 18:32:26 np0005535838 ceph-mgr[84304]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 18:32:26 np0005535838 ceph-mgr[84304]: pidfile_write: ignore empty --pid-file
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev 5e4e79e8-e0bf-4510-9e37-72aa1cc6fb23 (Updating mgr deployment (+1 -> 2))
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event 5e4e79e8-e0bf-4510-9e37-72aa1cc6fb23 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 18:32:26 np0005535838 podman[84285]: 2025-11-25 23:32:26.11466731 +0000 UTC m=+0.153285669 container init 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 podman[84285]: 2025-11-25 23:32:26.121710119 +0000 UTC m=+0.160328458 container start 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:32:26 np0005535838 podman[84285]: 2025-11-25 23:32:26.12698833 +0000 UTC m=+0.165606689 container attach 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3921811036' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:26 np0005535838 ceph-mgr[84304]: mgr[py] Loading python module 'alerts'
Nov 25 18:32:26 np0005535838 ceph-mgr[84304]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 18:32:26 np0005535838 ceph-mgr[84304]: mgr[py] Loading python module 'balancer'
Nov 25 18:32:26 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa[84293]: 2025-11-25T23:32:26.502+0000 7f04190af140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 18:32:26 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:32:26 np0005535838 ceph-mgr[84304]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 18:32:26 np0005535838 ceph-mgr[84304]: mgr[py] Loading python module 'cephadm'
Nov 25 18:32:26 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa[84293]: 2025-11-25T23:32:26.751+0000 7f04190af140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 18:32:27 np0005535838 podman[84673]: 2025-11-25 23:32:27.219345383 +0000 UTC m=+0.080461798 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:27 np0005535838 podman[84673]: 2025-11-25 23:32:27.360155815 +0000 UTC m=+0.221254470 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Added host compute-0
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 recursing_gagarin[84307]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 18:32:27 np0005535838 recursing_gagarin[84307]: Scheduled mon update...
Nov 25 18:32:27 np0005535838 recursing_gagarin[84307]: Scheduled mgr update...
Nov 25 18:32:27 np0005535838 recursing_gagarin[84307]: Scheduled osd.default_drive_group update...
Nov 25 18:32:27 np0005535838 systemd[1]: libpod-4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41.scope: Deactivated successfully.
Nov 25 18:32:27 np0005535838 podman[84285]: 2025-11-25 23:32:27.462007135 +0000 UTC m=+1.500625514 container died 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 18:32:27 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d4a89dc3966f656b62328b9b016d5499907a8b1ed135fd91d7e0b0a68ec22165-merged.mount: Deactivated successfully.
Nov 25 18:32:27 np0005535838 podman[84285]: 2025-11-25 23:32:27.539337307 +0000 UTC m=+1.577955656 container remove 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:27 np0005535838 systemd[1]: libpod-conmon-4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41.scope: Deactivated successfully.
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev fa46c673-c5ef-4319-993d-dcf182181f69 does not exist
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 18:32:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev 170bf164-f025-4c2d-825a-b7bdf989c95e (Updating mgr deployment (-1 -> 1))
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.cckgxa from compute-0 -- ports [8765]
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.cckgxa from compute-0 -- ports [8765]
Nov 25 18:32:27 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:28 np0005535838 python3[84865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:32:28 np0005535838 podman[84919]: 2025-11-25 23:32:28.179085411 +0000 UTC m=+0.056115125 container create d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:32:28 np0005535838 systemd[1]: Started libpod-conmon-d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d.scope.
Nov 25 18:32:28 np0005535838 podman[84919]: 2025-11-25 23:32:28.158340736 +0000 UTC m=+0.035370490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:28 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:28 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378b53322a6ae95938ed791b4856d7696515d659437a34685e581e2135848b59/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:28 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378b53322a6ae95938ed791b4856d7696515d659437a34685e581e2135848b59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:28 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378b53322a6ae95938ed791b4856d7696515d659437a34685e581e2135848b59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:28 np0005535838 podman[84919]: 2025-11-25 23:32:28.287345323 +0000 UTC m=+0.164375087 container init d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:28 np0005535838 podman[84919]: 2025-11-25 23:32:28.298494502 +0000 UTC m=+0.175524246 container start d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:28 np0005535838 podman[84919]: 2025-11-25 23:32:28.304624146 +0000 UTC m=+0.181653940 container attach d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: Added host compute-0
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: Saving service mon spec with placement compute-0
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: Saving service mgr spec with placement compute-0
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: Saving service osd.default_drive_group spec with placement compute-0
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:28 np0005535838 systemd[1]: Stopping Ceph mgr.compute-0.cckgxa for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:32:28 np0005535838 ceph-mgr[84304]: mgr[py] Loading python module 'crash'
Nov 25 18:32:28 np0005535838 podman[85035]: 2025-11-25 23:32:28.732840641 +0000 UTC m=+0.089188671 container died 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 18:32:28 np0005535838 systemd[1]: var-lib-containers-storage-overlay-9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e-merged.mount: Deactivated successfully.
Nov 25 18:32:28 np0005535838 podman[85035]: 2025-11-25 23:32:28.802319212 +0000 UTC m=+0.158667252 container remove 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 18:32:28 np0005535838 bash[85035]: ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa
Nov 25 18:32:28 np0005535838 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mgr.compute-0.cckgxa.service: Main process exited, code=exited, status=143/n/a
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 18:32:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645594730' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 18:32:28 np0005535838 infallible_wozniak[84952]: 
Nov 25 18:32:28 np0005535838 infallible_wozniak[84952]: {"fsid":"101922db-575f-58e2-980f-928050464f69","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":78,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-25T23:31:07.189601+0000","services":{}},"progress_events":{"170bf164-f025-4c2d-825a-b7bdf989c95e":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 25 18:32:28 np0005535838 systemd[1]: libpod-d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d.scope: Deactivated successfully.
Nov 25 18:32:28 np0005535838 podman[84919]: 2025-11-25 23:32:28.925869354 +0000 UTC m=+0.802899088 container died d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:28 np0005535838 systemd[1]: var-lib-containers-storage-overlay-378b53322a6ae95938ed791b4856d7696515d659437a34685e581e2135848b59-merged.mount: Deactivated successfully.
Nov 25 18:32:28 np0005535838 podman[84919]: 2025-11-25 23:32:28.989918609 +0000 UTC m=+0.866948313 container remove d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:29 np0005535838 systemd[1]: libpod-conmon-d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d.scope: Deactivated successfully.
Nov 25 18:32:29 np0005535838 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mgr.compute-0.cckgxa.service: Failed with result 'exit-code'.
Nov 25 18:32:29 np0005535838 systemd[1]: Stopped Ceph mgr.compute-0.cckgxa for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:32:29 np0005535838 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mgr.compute-0.cckgxa.service: Consumed 3.764s CPU time.
Nov 25 18:32:29 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:29 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:29 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:29 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.cckgxa
Nov 25 18:32:29 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.cckgxa
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"} v 0) v1
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"}]: dispatch
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: Removing daemon mgr.compute-0.cckgxa from compute-0 -- ports [8765]
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"}]': finished
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:29 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev 170bf164-f025-4c2d-825a-b7bdf989c95e (Updating mgr deployment (-1 -> 1))
Nov 25 18:32:29 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event 170bf164-f025-4c2d-825a-b7bdf989c95e (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:29 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 407603af-93ec-47c3-9c69-317c36f86d49 does not exist
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:32:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:32:29 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:30 np0005535838 podman[85286]: 2025-11-25 23:32:30.211330701 +0000 UTC m=+0.058052267 container create c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 18:32:30 np0005535838 systemd[1]: Started libpod-conmon-c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a.scope.
Nov 25 18:32:30 np0005535838 podman[85286]: 2025-11-25 23:32:30.190895883 +0000 UTC m=+0.037617479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:30 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:30 np0005535838 podman[85286]: 2025-11-25 23:32:30.326880867 +0000 UTC m=+0.173602483 container init c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 18:32:30 np0005535838 podman[85286]: 2025-11-25 23:32:30.342536156 +0000 UTC m=+0.189257722 container start c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 18:32:30 np0005535838 podman[85286]: 2025-11-25 23:32:30.347224763 +0000 UTC m=+0.193946329 container attach c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:32:30 np0005535838 lucid_khorana[85302]: 167 167
Nov 25 18:32:30 np0005535838 systemd[1]: libpod-c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a.scope: Deactivated successfully.
Nov 25 18:32:30 np0005535838 podman[85286]: 2025-11-25 23:32:30.349643578 +0000 UTC m=+0.196365124 container died c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:30 np0005535838 systemd[1]: var-lib-containers-storage-overlay-3900020c5d11de5a42b34ad5e97ee6dd0afd9a89c5538d8ba451b29c8fe2377a-merged.mount: Deactivated successfully.
Nov 25 18:32:30 np0005535838 podman[85286]: 2025-11-25 23:32:30.395569168 +0000 UTC m=+0.242290734 container remove c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:30 np0005535838 ceph-mon[75654]: Removing key for mgr.compute-0.cckgxa
Nov 25 18:32:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"}]: dispatch
Nov 25 18:32:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"}]': finished
Nov 25 18:32:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:32:30 np0005535838 systemd[1]: libpod-conmon-c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a.scope: Deactivated successfully.
Nov 25 18:32:30 np0005535838 podman[85326]: 2025-11-25 23:32:30.568851162 +0000 UTC m=+0.035190444 container create 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:30 np0005535838 systemd[1]: Started libpod-conmon-1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018.scope.
Nov 25 18:32:30 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:30 np0005535838 podman[85326]: 2025-11-25 23:32:30.554740213 +0000 UTC m=+0.021079515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:30 np0005535838 podman[85326]: 2025-11-25 23:32:30.653751737 +0000 UTC m=+0.120091069 container init 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:30 np0005535838 podman[85326]: 2025-11-25 23:32:30.66694647 +0000 UTC m=+0.133285762 container start 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:30 np0005535838 podman[85326]: 2025-11-25 23:32:30.670822694 +0000 UTC m=+0.137162016 container attach 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:31 np0005535838 ceph-mgr[75954]: [progress INFO root] Writing back 3 completed events
Nov 25 18:32:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 18:32:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:31 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:31 np0005535838 keen_mirzakhani[85343]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:32:31 np0005535838 keen_mirzakhani[85343]: --> relative data size: 1.0
Nov 25 18:32:31 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 18:32:31 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c5ddf08a-6193-41e0-8332-60b5083aa62e
Nov 25 18:32:31 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"} v 0) v1
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1583645874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"}]: dispatch
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1583645874' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"}]': finished
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:32 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1583645874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"}]: dispatch
Nov 25 18:32:32 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1583645874' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"}]': finished
Nov 25 18:32:32 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 18:32:32 np0005535838 lvm[85406]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 18:32:32 np0005535838 lvm[85406]: VG ceph_vg0 finished
Nov 25 18:32:32 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 25 18:32:32 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 25 18:32:32 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 18:32:32 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:32 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 25 18:32:33 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 18:32:33 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174327017' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 18:32:33 np0005535838 keen_mirzakhani[85343]: stderr: got monmap epoch 1
Nov 25 18:32:33 np0005535838 keen_mirzakhani[85343]: --> Creating keyring file for osd.0
Nov 25 18:32:33 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 25 18:32:33 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 25 18:32:33 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c5ddf08a-6193-41e0-8332-60b5083aa62e --setuser ceph --setgroup ceph
Nov 25 18:32:33 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:34 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 18:32:34 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 18:32:34 np0005535838 ceph-mon[75654]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 18:32:34 np0005535838 ceph-mon[75654]: Cluster is now healthy
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:33.109+0000 7fdf9b97a740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:33.109+0000 7fdf9b97a740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:33.109+0000 7fdf9b97a740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:33.109+0000 7fdf9b97a740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 18:32:35 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 21bbab34-bea3-466b-8bf7-812749fcef47
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"} v 0) v1
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1437777547' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"}]: dispatch
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1437777547' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"}]': finished
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:35 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 18:32:35 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:35 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:36 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1437777547' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"}]: dispatch
Nov 25 18:32:36 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1437777547' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"}]': finished
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 18:32:36 np0005535838 lvm[86361]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 18:32:36 np0005535838 lvm[86361]: VG ceph_vg1 finished
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 25 18:32:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 18:32:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149492936' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: stderr: got monmap epoch 1
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: --> Creating keyring file for osd.1
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 25 18:32:36 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 21bbab34-bea3-466b-8bf7-812749fcef47 --setuser ceph --setgroup ceph
Nov 25 18:32:37 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:36.801+0000 7fcfcaae7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:36.802+0000 7fcfcaae7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:36.802+0000 7fcfcaae7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:36.802+0000 7fcfcaae7740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 18:32:39 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 019d967b-1a56-4e90-8682-a890da577e20
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"} v 0) v1
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/144360762' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"}]: dispatch
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/144360762' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"}]': finished
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:32:39 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 18:32:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:32:39 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:39 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:32:39 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 18:32:40 np0005535838 lvm[87311]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 18:32:40 np0005535838 lvm[87311]: VG ceph_vg2 finished
Nov 25 18:32:40 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/144360762' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"}]: dispatch
Nov 25 18:32:40 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/144360762' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"}]': finished
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 25 18:32:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 18:32:40 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239912430' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: stderr: got monmap epoch 1
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: --> Creating keyring file for osd.2
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 25 18:32:40 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 019d967b-1a56-4e90-8682-a890da577e20 --setuser ceph --setgroup ceph
Nov 25 18:32:41 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:40.612+0000 7f3d7028a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:40.612+0000 7f3d7028a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:40.612+0000 7f3d7028a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: stderr: 2025-11-25T23:32:40.612+0000 7f3d7028a740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 18:32:42 np0005535838 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 18:32:43 np0005535838 keen_mirzakhani[85343]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 25 18:32:43 np0005535838 keen_mirzakhani[85343]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 25 18:32:43 np0005535838 systemd[1]: libpod-1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018.scope: Deactivated successfully.
Nov 25 18:32:43 np0005535838 systemd[1]: libpod-1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018.scope: Consumed 6.786s CPU time.
Nov 25 18:32:43 np0005535838 podman[88232]: 2025-11-25 23:32:43.125300304 +0000 UTC m=+0.044802311 container died 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:43 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318-merged.mount: Deactivated successfully.
Nov 25 18:32:43 np0005535838 podman[88232]: 2025-11-25 23:32:43.211375641 +0000 UTC m=+0.130877588 container remove 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:32:43 np0005535838 systemd[1]: libpod-conmon-1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018.scope: Deactivated successfully.
Nov 25 18:32:43 np0005535838 podman[88389]: 2025-11-25 23:32:43.95510799 +0000 UTC m=+0.053617361 container create b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:32:43 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:44 np0005535838 systemd[1]: Started libpod-conmon-b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e.scope.
Nov 25 18:32:44 np0005535838 podman[88389]: 2025-11-25 23:32:43.928462125 +0000 UTC m=+0.026971536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:44 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:44 np0005535838 podman[88389]: 2025-11-25 23:32:44.06470166 +0000 UTC m=+0.163211021 container init b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:44 np0005535838 podman[88389]: 2025-11-25 23:32:44.077229077 +0000 UTC m=+0.175738438 container start b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:32:44 np0005535838 podman[88389]: 2025-11-25 23:32:44.082147095 +0000 UTC m=+0.180656506 container attach b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:32:44 np0005535838 elated_tu[88405]: 167 167
Nov 25 18:32:44 np0005535838 systemd[1]: libpod-b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e.scope: Deactivated successfully.
Nov 25 18:32:44 np0005535838 podman[88389]: 2025-11-25 23:32:44.084660051 +0000 UTC m=+0.183169422 container died b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:32:44 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2aa2bff6fe618b7a163581cf2cacefe80c91441756792816dbbb3f1ccf5ef59f-merged.mount: Deactivated successfully.
Nov 25 18:32:44 np0005535838 podman[88389]: 2025-11-25 23:32:44.129353197 +0000 UTC m=+0.227862558 container remove b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:32:44 np0005535838 systemd[1]: libpod-conmon-b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e.scope: Deactivated successfully.
Nov 25 18:32:44 np0005535838 podman[88431]: 2025-11-25 23:32:44.354881993 +0000 UTC m=+0.053016725 container create 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:32:44 np0005535838 systemd[1]: Started libpod-conmon-83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8.scope.
Nov 25 18:32:44 np0005535838 podman[88431]: 2025-11-25 23:32:44.333464004 +0000 UTC m=+0.031598816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:44 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:44 np0005535838 podman[88431]: 2025-11-25 23:32:44.464757771 +0000 UTC m=+0.162892573 container init 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:32:44 np0005535838 podman[88431]: 2025-11-25 23:32:44.475285155 +0000 UTC m=+0.173419897 container start 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 18:32:44 np0005535838 podman[88431]: 2025-11-25 23:32:44.478814848 +0000 UTC m=+0.176949660 container attach 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]: {
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:    "0": [
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:        {
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "devices": [
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "/dev/loop3"
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            ],
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_name": "ceph_lv0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_size": "21470642176",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "name": "ceph_lv0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "tags": {
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.cluster_name": "ceph",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.crush_device_class": "",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.encrypted": "0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.osd_id": "0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.type": "block",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.vdo": "0"
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            },
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "type": "block",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "vg_name": "ceph_vg0"
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:        }
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:    ],
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:    "1": [
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:        {
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "devices": [
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "/dev/loop4"
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            ],
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_name": "ceph_lv1",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_size": "21470642176",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "name": "ceph_lv1",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "tags": {
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.cluster_name": "ceph",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.crush_device_class": "",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.encrypted": "0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.osd_id": "1",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.type": "block",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.vdo": "0"
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            },
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "type": "block",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "vg_name": "ceph_vg1"
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:        }
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:    ],
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:    "2": [
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:        {
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "devices": [
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "/dev/loop5"
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            ],
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_name": "ceph_lv2",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_size": "21470642176",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "name": "ceph_lv2",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "tags": {
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.cluster_name": "ceph",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.crush_device_class": "",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.encrypted": "0",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.osd_id": "2",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.type": "block",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:                "ceph.vdo": "0"
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            },
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "type": "block",
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:            "vg_name": "ceph_vg2"
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:        }
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]:    ]
Nov 25 18:32:45 np0005535838 zealous_mclaren[88447]: }
Nov 25 18:32:45 np0005535838 systemd[1]: libpod-83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8.scope: Deactivated successfully.
Nov 25 18:32:45 np0005535838 podman[88431]: 2025-11-25 23:32:45.256637737 +0000 UTC m=+0.954772489 container died 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:45 np0005535838 systemd[1]: var-lib-containers-storage-overlay-e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1-merged.mount: Deactivated successfully.
Nov 25 18:32:45 np0005535838 podman[88431]: 2025-11-25 23:32:45.320136315 +0000 UTC m=+1.018271077 container remove 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:32:45 np0005535838 systemd[1]: libpod-conmon-83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8.scope: Deactivated successfully.
Nov 25 18:32:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 25 18:32:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 18:32:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:32:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:32:45 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 25 18:32:45 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 25 18:32:45 np0005535838 podman[88606]: 2025-11-25 23:32:45.981798212 +0000 UTC m=+0.037364335 container create d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:45 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:46 np0005535838 systemd[1]: Started libpod-conmon-d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54.scope.
Nov 25 18:32:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:46 np0005535838 podman[88606]: 2025-11-25 23:32:45.965856117 +0000 UTC m=+0.021422270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:46 np0005535838 podman[88606]: 2025-11-25 23:32:46.071810112 +0000 UTC m=+0.127376295 container init d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:46 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 18:32:46 np0005535838 ceph-mon[75654]: Deploying daemon osd.0 on compute-0
Nov 25 18:32:46 np0005535838 podman[88606]: 2025-11-25 23:32:46.079629556 +0000 UTC m=+0.135195689 container start d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:32:46 np0005535838 quirky_wozniak[88623]: 167 167
Nov 25 18:32:46 np0005535838 systemd[1]: libpod-d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54.scope: Deactivated successfully.
Nov 25 18:32:46 np0005535838 podman[88606]: 2025-11-25 23:32:46.087282296 +0000 UTC m=+0.142848499 container attach d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:46 np0005535838 podman[88606]: 2025-11-25 23:32:46.0882436 +0000 UTC m=+0.143809733 container died d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 18:32:46 np0005535838 systemd[1]: var-lib-containers-storage-overlay-00d8399ef089d1089dc0351ff81858bd20bfefd939194bd58a7c3c355a7bbded-merged.mount: Deactivated successfully.
Nov 25 18:32:46 np0005535838 podman[88606]: 2025-11-25 23:32:46.119129516 +0000 UTC m=+0.174695669 container remove d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:32:46 np0005535838 systemd[1]: libpod-conmon-d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54.scope: Deactivated successfully.
Nov 25 18:32:46 np0005535838 podman[88653]: 2025-11-25 23:32:46.397129392 +0000 UTC m=+0.041617317 container create e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:32:46 np0005535838 systemd[1]: Started libpod-conmon-e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63.scope.
Nov 25 18:32:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:46 np0005535838 podman[88653]: 2025-11-25 23:32:46.380844407 +0000 UTC m=+0.025332352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:46 np0005535838 podman[88653]: 2025-11-25 23:32:46.505531711 +0000 UTC m=+0.150019726 container init e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:32:46 np0005535838 podman[88653]: 2025-11-25 23:32:46.517112934 +0000 UTC m=+0.161600859 container start e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:32:46 np0005535838 podman[88653]: 2025-11-25 23:32:46.520337908 +0000 UTC m=+0.164825923 container attach e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:32:47 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test[88669]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 18:32:47 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test[88669]:                            [--no-systemd] [--no-tmpfs]
Nov 25 18:32:47 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test[88669]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 18:32:47 np0005535838 systemd[1]: libpod-e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63.scope: Deactivated successfully.
Nov 25 18:32:47 np0005535838 podman[88653]: 2025-11-25 23:32:47.140980625 +0000 UTC m=+0.785468550 container died e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:47 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734-merged.mount: Deactivated successfully.
Nov 25 18:32:47 np0005535838 podman[88653]: 2025-11-25 23:32:47.205712334 +0000 UTC m=+0.850200269 container remove e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 18:32:47 np0005535838 systemd[1]: libpod-conmon-e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63.scope: Deactivated successfully.
Nov 25 18:32:47 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:47 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:47 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:47 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:47 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:47 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:47 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:48 np0005535838 systemd[1]: Starting Ceph osd.0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:32:48 np0005535838 podman[88828]: 2025-11-25 23:32:48.325275863 +0000 UTC m=+0.072776161 container create 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:48 np0005535838 podman[88828]: 2025-11-25 23:32:48.292923818 +0000 UTC m=+0.040424186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:48 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:48 np0005535838 podman[88828]: 2025-11-25 23:32:48.42481801 +0000 UTC m=+0.172318328 container init 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:48 np0005535838 podman[88828]: 2025-11-25 23:32:48.437496881 +0000 UTC m=+0.184997199 container start 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 18:32:48 np0005535838 podman[88828]: 2025-11-25 23:32:48.44281994 +0000 UTC m=+0.190320258 container attach 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:49 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 18:32:49 np0005535838 bash[88828]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 18:32:49 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 18:32:49 np0005535838 bash[88828]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 18:32:49 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 18:32:49 np0005535838 bash[88828]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 18:32:49 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 18:32:49 np0005535838 bash[88828]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 18:32:49 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:49 np0005535838 bash[88828]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:49 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 18:32:49 np0005535838 bash[88828]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 18:32:49 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 18:32:49 np0005535838 bash[88828]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 18:32:49 np0005535838 systemd[1]: libpod-65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b.scope: Deactivated successfully.
Nov 25 18:32:49 np0005535838 systemd[1]: libpod-65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b.scope: Consumed 1.147s CPU time.
Nov 25 18:32:49 np0005535838 podman[88828]: 2025-11-25 23:32:49.564909844 +0000 UTC m=+1.312410132 container died 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 18:32:49 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f-merged.mount: Deactivated successfully.
Nov 25 18:32:49 np0005535838 podman[88828]: 2025-11-25 23:32:49.647988743 +0000 UTC m=+1.395489041 container remove 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 18:32:49 np0005535838 podman[89024]: 2025-11-25 23:32:49.936645866 +0000 UTC m=+0.047317616 container create 1cdf379c2ca7941793d745665193cd2db26279d7acee8d9c89085655245b879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:49 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:50 np0005535838 podman[89024]: 2025-11-25 23:32:50.007504065 +0000 UTC m=+0.118175885 container init 1cdf379c2ca7941793d745665193cd2db26279d7acee8d9c89085655245b879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:50 np0005535838 podman[89024]: 2025-11-25 23:32:49.91684749 +0000 UTC m=+0.027519290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:50 np0005535838 podman[89024]: 2025-11-25 23:32:50.015583836 +0000 UTC m=+0.126255616 container start 1cdf379c2ca7941793d745665193cd2db26279d7acee8d9c89085655245b879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:32:50 np0005535838 bash[89024]: 1cdf379c2ca7941793d745665193cd2db26279d7acee8d9c89085655245b879f
Nov 25 18:32:50 np0005535838 systemd[1]: Started Ceph osd.0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: pidfile_write: ignore empty --pid-file
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 18:32:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:50 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:50 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 25 18:32:50 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 18:32:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:32:50 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:32:50 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 25 18:32:50 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 25 18:32:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: load: jerasure load: lrc 
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 18:32:50 np0005535838 podman[89204]: 2025-11-25 23:32:50.767049808 +0000 UTC m=+0.047426989 container create 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:50 np0005535838 systemd[1]: Started libpod-conmon-06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9.scope.
Nov 25 18:32:50 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:50 np0005535838 podman[89204]: 2025-11-25 23:32:50.74759047 +0000 UTC m=+0.027967681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:50 np0005535838 podman[89204]: 2025-11-25 23:32:50.849644344 +0000 UTC m=+0.130021555 container init 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:50 np0005535838 podman[89204]: 2025-11-25 23:32:50.856369779 +0000 UTC m=+0.136746950 container start 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:50 np0005535838 podman[89204]: 2025-11-25 23:32:50.859156382 +0000 UTC m=+0.139533603 container attach 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:50 np0005535838 agitated_williamson[89221]: 167 167
Nov 25 18:32:50 np0005535838 systemd[1]: libpod-06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9.scope: Deactivated successfully.
Nov 25 18:32:50 np0005535838 podman[89204]: 2025-11-25 23:32:50.862093159 +0000 UTC m=+0.142470330 container died 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:32:50 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 18:32:50 np0005535838 systemd[1]: var-lib-containers-storage-overlay-93d25b43b1a60a75f529204b5979fa45d92686e6a956b8ad6443a54f389bd7e1-merged.mount: Deactivated successfully.
Nov 25 18:32:50 np0005535838 podman[89204]: 2025-11-25 23:32:50.899118005 +0000 UTC m=+0.179495176 container remove 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:50 np0005535838 systemd[1]: libpod-conmon-06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9.scope: Deactivated successfully.
Nov 25 18:32:51 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:51 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:51 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 18:32:51 np0005535838 ceph-mon[75654]: Deploying daemon osd.1 on compute-0
Nov 25 18:32:51 np0005535838 podman[89258]: 2025-11-25 23:32:51.122232888 +0000 UTC m=+0.036587267 container create 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluefs mount
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluefs mount shared_bdev_used = 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
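The sizes in the bdev/bluefs lines above hang together: the hex size 0x4ffc00000 is 21470642176 bytes (just under 20 GiB, which the log rounds to "20 GiB"), and the `db_paths` figure on the `_prepare_db_environment` line is 95% of that block device size (the 95% split is an observation from these numbers; the exact BlueStore sizing logic is not shown in this log). A quick check:

```python
# Block device size from the "bdev ... open size" line.
size = 0x4ffc00000
print(size)            # 21470642176, as logged

# Slightly under 20 GiB; the log rounds it to "20 GiB".
print(size / 2**30)    # ~19.996

# The db_paths value on the _prepare_db_environment line is 95% of the device.
print(int(size * 0.95))  # 20397110067, as logged
```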
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: RocksDB version: 7.9.2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Git sha 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: DB SUMMARY
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: DB Session ID:  QLA8XFNANC8L6IOXF55E
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: CURRENT file:  CURRENT
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                         Options.error_if_exists: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.create_if_missing: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                                     Options.env: 0x55a4ec889d50
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                                Options.info_log: 0x55a4eba7c7e0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                              Options.statistics: (nil)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.use_fsync: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                              Options.db_log_dir: 
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.write_buffer_manager: 0x55a4ec992460
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 18:32:51 np0005535838 systemd[1]: Started libpod-conmon-98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197.scope.
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.unordered_write: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.row_cache: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                              Options.wal_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.two_write_queues: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.wal_compression: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.atomic_flush: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.max_background_jobs: 4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.max_background_compactions: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.max_subcompactions: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.max_open_files: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Compression algorithms supported:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: 	kZSTD supported: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: 	kXpressCompression supported: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: 	kZlibCompression supported: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: 	kSnappyCompression supported: 1
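Sequences like `#011` and `#012` in these records are rsyslog-style octal escapes for control characters (`#011` is a tab, `#012` a newline); the long `#012` runs in the `table_factory options` dumps further down are embedded newlines from a multi-line RocksDB message. A minimal sketch to decode them when post-processing a capture like this one (assuming plain rsyslog/journalctl escaping, nothing Ceph-specific):

```python
import re

def decode_syslog_escapes(s: str) -> str:
    """Turn rsyslog #NNN octal escapes back into the control characters they encode."""
    return re.sub(r'#([0-7]{3})', lambda m: chr(int(m.group(1), 8)), s)

# Example on a fragment of the compression list above:
print(decode_syslog_escapes('#011kLZ4Compression supported: 1'))
```

Running the `table_factory options` lines through this turns each one back into the readable multi-line block RocksDB originally emitted.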
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a4eba691f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a4eba691f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba691f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba691f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba691f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba691f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba691f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba69090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 podman[89258]: 2025-11-25 23:32:51.20164972 +0000 UTC m=+0.116004109 container init 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 18:32:51 np0005535838 podman[89258]: 2025-11-25 23:32:51.106318952 +0000 UTC m=+0.020673341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba69090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba69090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3bce52ae-013e-4dc3-b6d7-f1899aea7616
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571169131, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571169315, "job": 1, "event": "recovery_finished"}
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: freelist init
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: freelist _read_cfg
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluefs umount
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 18:32:51 np0005535838 podman[89258]: 2025-11-25 23:32:51.212119424 +0000 UTC m=+0.126473793 container start 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:32:51 np0005535838 podman[89258]: 2025-11-25 23:32:51.215156793 +0000 UTC m=+0.129511152 container attach 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluefs mount
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluefs mount shared_bdev_used = 4718592
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: RocksDB version: 7.9.2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Git sha 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: DB SUMMARY
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: DB Session ID:  QLA8XFNANC8L6IOXF55F
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: CURRENT file:  CURRENT
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                         Options.error_if_exists: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.create_if_missing: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                                     Options.env: 0x55a4eca227e0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                                Options.info_log: 0x55a4ec885a80
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                              Options.statistics: (nil)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.use_fsync: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                              Options.db_log_dir: 
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.write_buffer_manager: 0x55a4ec9926e0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.unordered_write: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.row_cache: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                              Options.wal_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.two_write_queues: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.wal_compression: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.atomic_flush: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.max_background_jobs: 4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.max_background_compactions: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.max_subcompactions: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.max_open_files: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Compression algorithms supported:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     kZSTD supported: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     kXpressCompression supported: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     kBZip2Compression supported: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     kZSTDNotFinalCompression supported: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     kLZ4Compression supported: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     kZlibCompression supported: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     kLZ4HCCompression supported: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     kSnappyCompression supported: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba691f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a4eba691f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a4eba691f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a4eba691f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a4eba691f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a4eba691f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a4eba691f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c300)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba69090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c300)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a4eba69090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a4eba69090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3bce52ae-013e-4dc3-b6d7-f1899aea7616
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571452491, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571456885, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113571, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3bce52ae-013e-4dc3-b6d7-f1899aea7616", "db_session_id": "QLA8XFNANC8L6IOXF55F", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571459347, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113571, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3bce52ae-013e-4dc3-b6d7-f1899aea7616", "db_session_id": "QLA8XFNANC8L6IOXF55F", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571462058, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113571, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3bce52ae-013e-4dc3-b6d7-f1899aea7616", "db_session_id": "QLA8XFNANC8L6IOXF55F", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571464364, "job": 1, "event": "recovery_finished"}
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a4ebbd6000
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: DB pointer 0x55a4ec97ba00
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usag
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: _get_class not permitted to load lua
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: _get_class not permitted to load sdk
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: _get_class not permitted to load test_remote_reads
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: osd.0 0 load_pgs
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: osd.0 0 load_pgs opened 0 pgs
Nov 25 18:32:51 np0005535838 ceph-osd[89044]: osd.0 0 log_to_monitors true
Nov 25 18:32:51 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0[89040]: 2025-11-25T23:32:51.507+0000 7f3007b5f740 -1 osd.0 0 log_to_monitors true
Nov 25 18:32:51 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 25 18:32:51 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 18:32:51 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test[89364]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 18:32:51 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test[89364]:                            [--no-systemd] [--no-tmpfs]
Nov 25 18:32:51 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test[89364]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 18:32:51 np0005535838 systemd[1]: libpod-98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197.scope: Deactivated successfully.
Nov 25 18:32:51 np0005535838 podman[89258]: 2025-11-25 23:32:51.809440452 +0000 UTC m=+0.723794821 container died 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:32:51 np0005535838 systemd[1]: var-lib-containers-storage-overlay-857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a-merged.mount: Deactivated successfully.
Nov 25 18:32:51 np0005535838 podman[89258]: 2025-11-25 23:32:51.886262347 +0000 UTC m=+0.800616756 container remove 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:32:51 np0005535838 systemd[1]: libpod-conmon-98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197.scope: Deactivated successfully.
Nov 25 18:32:51 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 18:32:52 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 18:32:52 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:32:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:32:52 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:32:52 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:52 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:52 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:52 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 18:32:52 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 18:32:52 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:52 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:52 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:52 np0005535838 systemd[1]: Starting Ceph osd.1 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:32:53 np0005535838 podman[89848]: 2025-11-25 23:32:52.998755961 +0000 UTC m=+0.064677778 container create 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 18:32:53 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:53 np0005535838 podman[89848]: 2025-11-25 23:32:52.973552484 +0000 UTC m=+0.039474411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:53 np0005535838 podman[89848]: 2025-11-25 23:32:53.078689407 +0000 UTC m=+0.144611244 container init 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 18:32:53 np0005535838 podman[89848]: 2025-11-25 23:32:53.090738702 +0000 UTC m=+0.156660519 container start 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 18:32:53 np0005535838 podman[89848]: 2025-11-25 23:32:53.096920763 +0000 UTC m=+0.162842580 container attach 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 25 18:32:53 np0005535838 ceph-osd[89044]: osd.0 0 done with init, starting boot process
Nov 25 18:32:53 np0005535838 ceph-osd[89044]: osd.0 0 start_boot
Nov 25 18:32:53 np0005535838 ceph-osd[89044]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 18:32:53 np0005535838 ceph-osd[89044]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 18:32:53 np0005535838 ceph-osd[89044]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 18:32:53 np0005535838 ceph-osd[89044]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 18:32:53 np0005535838 ceph-osd[89044]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:32:53 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 18:32:53 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:53 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:32:53 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1106816781; not ready for session (expect reconnect)
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:53 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:53 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 18:32:53 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:54 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 18:32:54 np0005535838 bash[89848]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 18:32:54 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 18:32:54 np0005535838 bash[89848]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 18:32:54 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 18:32:54 np0005535838 bash[89848]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 18:32:54 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 18:32:54 np0005535838 bash[89848]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 18:32:54 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1106816781; not ready for session (expect reconnect)
Nov 25 18:32:54 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:54 np0005535838 bash[89848]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:54 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 18:32:54 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 18:32:54 np0005535838 bash[89848]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 18:32:54 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 18:32:54 np0005535838 bash[89848]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 18:32:54 np0005535838 systemd[1]: libpod-43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313.scope: Deactivated successfully.
Nov 25 18:32:54 np0005535838 systemd[1]: libpod-43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313.scope: Consumed 1.124s CPU time.
Nov 25 18:32:54 np0005535838 conmon[89863]: conmon 43d84d64c2b680d592d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313.scope/container/memory.events
Nov 25 18:32:54 np0005535838 podman[89848]: 2025-11-25 23:32:54.218805052 +0000 UTC m=+1.284726909 container died 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 18:32:54 np0005535838 systemd[1]: var-lib-containers-storage-overlay-5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f-merged.mount: Deactivated successfully.
Nov 25 18:32:54 np0005535838 podman[89848]: 2025-11-25 23:32:54.298956703 +0000 UTC m=+1.364878560 container remove 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:54 np0005535838 podman[90034]: 2025-11-25 23:32:54.517718193 +0000 UTC m=+0.046054103 container create 210a65a79e016cf56b9bf57fc9f3a856ecf3e518112f002c5580b96eeb0ff119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:54 np0005535838 podman[90034]: 2025-11-25 23:32:54.493318986 +0000 UTC m=+0.021654906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:54 np0005535838 podman[90034]: 2025-11-25 23:32:54.594795035 +0000 UTC m=+0.123130985 container init 210a65a79e016cf56b9bf57fc9f3a856ecf3e518112f002c5580b96eeb0ff119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:32:54 np0005535838 podman[90034]: 2025-11-25 23:32:54.606530291 +0000 UTC m=+0.134866201 container start 210a65a79e016cf56b9bf57fc9f3a856ecf3e518112f002c5580b96eeb0ff119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:54 np0005535838 bash[90034]: 210a65a79e016cf56b9bf57fc9f3a856ecf3e518112f002c5580b96eeb0ff119
Nov 25 18:32:54 np0005535838 systemd[1]: Started Ceph osd.1 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: pidfile_write: ignore empty --pid-file
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:32:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:32:54 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 25 18:32:54 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: load: jerasure load: lrc 
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:32:54 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 18:32:55 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1106816781; not ready for session (expect reconnect)
Nov 25 18:32:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:55 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:55 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 18:32:55 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:55 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:55 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 18:32:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 18:32:55 np0005535838 podman[90220]: 2025-11-25 23:32:55.403371367 +0000 UTC m=+0.043361593 container create e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.538 iops: 9353.738 elapsed_sec: 0.321
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: log_channel(cluster) log [WRN] : OSD bench result of 9353.738310 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: osd.0 0 waiting for initial osdmap
Nov 25 18:32:55 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0[89040]: 2025-11-25T23:32:55.439+0000 7f3003adf640 -1 osd.0 0 waiting for initial osdmap
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 25 18:32:55 np0005535838 systemd[1]: Started libpod-conmon-e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7.scope.
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 25 18:32:55 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-0[89040]: 2025-11-25T23:32:55.460+0000 7f2fff107640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 18:32:55 np0005535838 ceph-osd[89044]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 25 18:32:55 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:55 np0005535838 podman[90220]: 2025-11-25 23:32:55.383481478 +0000 UTC m=+0.023471734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:55 np0005535838 podman[90220]: 2025-11-25 23:32:55.489050882 +0000 UTC m=+0.129041118 container init e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluefs mount
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluefs mount shared_bdev_used = 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 18:32:55 np0005535838 podman[90220]: 2025-11-25 23:32:55.495890651 +0000 UTC m=+0.135880887 container start e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:55 np0005535838 podman[90220]: 2025-11-25 23:32:55.499370782 +0000 UTC m=+0.139361028 container attach e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 18:32:55 np0005535838 laughing_moser[90236]: 167 167
Nov 25 18:32:55 np0005535838 systemd[1]: libpod-e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7.scope: Deactivated successfully.
Nov 25 18:32:55 np0005535838 podman[90220]: 2025-11-25 23:32:55.502280438 +0000 UTC m=+0.142270684 container died e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: RocksDB version: 7.9.2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Git sha 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: DB SUMMARY
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: DB Session ID:  5XYP8PC920X025ZAXNWI
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: CURRENT file:  CURRENT
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                         Options.error_if_exists: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.create_if_missing: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                                     Options.env: 0x5613ea42dc70
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                                Options.info_log: 0x5613e962a8a0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                              Options.statistics: (nil)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.use_fsync: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                              Options.db_log_dir: 
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.write_buffer_manager: 0x5613ea540460
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.unordered_write: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.row_cache: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                              Options.wal_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.two_write_queues: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.wal_compression: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.atomic_flush: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.max_background_jobs: 4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.max_background_compactions: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.max_subcompactions: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.max_open_files: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Compression algorithms supported:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: #011kZSTD supported: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: #011kXpressCompression supported: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: #011kZlibCompression supported: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5613e96171f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5613e96171f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5613e96171f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5613e9617090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5613e9617090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5613e9617090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 18:32:55 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1f6bcf38e55bac725bd126f9fa0309029912e9d3ffce053e386992e5aa2c96eb-merged.mount: Deactivated successfully.
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a522a4cb-9102-4aa4-a86b-8971b6d4b06b
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575542603, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575542806, "job": 1, "event": "recovery_finished"}
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: freelist init
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: freelist _read_cfg
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluefs umount
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 18:32:55 np0005535838 podman[90220]: 2025-11-25 23:32:55.545793804 +0000 UTC m=+0.185784040 container remove e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:55 np0005535838 systemd[1]: libpod-conmon-e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7.scope: Deactivated successfully.
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluefs mount
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluefs mount shared_bdev_used = 4718592
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: RocksDB version: 7.9.2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Git sha 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: DB SUMMARY
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: DB Session ID:  5XYP8PC920X025ZAXNWJ
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: CURRENT file:  CURRENT
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                         Options.error_if_exists: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.create_if_missing: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                                     Options.env: 0x5613ea5e8460
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                                Options.info_log: 0x5613e962a600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                              Options.statistics: (nil)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.use_fsync: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                              Options.db_log_dir: 
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.write_buffer_manager: 0x5613ea540460
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.unordered_write: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.row_cache: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                              Options.wal_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.two_write_queues: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.wal_compression: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.atomic_flush: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.max_background_jobs: 4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.max_background_compactions: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.max_subcompactions: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.max_open_files: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Compression algorithms supported:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: 	kZSTD supported: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: 	kXpressCompression supported: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: 	kZlibCompression supported: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5613e96171f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e96171f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e9617090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e9617090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 podman[90464]: 2025-11-25 23:32:55.788237251 +0000 UTC m=+0.054604576 container create 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5613e9617090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a522a4cb-9102-4aa4-a86b-8971b6d4b06b
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575781136, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575786014, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113575, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a522a4cb-9102-4aa4-a86b-8971b6d4b06b", "db_session_id": "5XYP8PC920X025ZAXNWJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575789027, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113575, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a522a4cb-9102-4aa4-a86b-8971b6d4b06b", "db_session_id": "5XYP8PC920X025ZAXNWJ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575791846, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113575, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a522a4cb-9102-4aa4-a86b-8971b6d4b06b", "db_session_id": "5XYP8PC920X025ZAXNWJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575793533, "job": 1, "event": "recovery_finished"}
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5613e965e000
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: DB pointer 0x5613ea51fa00
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usag
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: _get_class not permitted to load lua
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: _get_class not permitted to load sdk
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: _get_class not permitted to load test_remote_reads
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: osd.1 0 load_pgs
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: osd.1 0 load_pgs opened 0 pgs
Nov 25 18:32:55 np0005535838 ceph-osd[90055]: osd.1 0 log_to_monitors true
Nov 25 18:32:55 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1[90051]: 2025-11-25T23:32:55.816+0000 7f5fc9dd1740 -1 osd.1 0 log_to_monitors true
Nov 25 18:32:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 25 18:32:55 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 18:32:55 np0005535838 systemd[1]: Started libpod-conmon-25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e.scope.
Nov 25 18:32:55 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:55 np0005535838 podman[90464]: 2025-11-25 23:32:55.764665796 +0000 UTC m=+0.031033201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:55 np0005535838 podman[90464]: 2025-11-25 23:32:55.876044813 +0000 UTC m=+0.142412168 container init 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:32:55 np0005535838 podman[90464]: 2025-11-25 23:32:55.881482125 +0000 UTC m=+0.147849440 container start 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 18:32:55 np0005535838 podman[90464]: 2025-11-25 23:32:55.892513143 +0000 UTC m=+0.158880468 container attach 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 18:32:55 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 18:32:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:32:55
Nov 25 18:32:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:32:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:32:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] No pools available
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1106816781; not ready for session (expect reconnect)
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: Deploying daemon osd.2 on compute-0
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: OSD bench result of 9353.738310 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 18:32:56 np0005535838 ceph-osd[89044]: osd.0 9 state: booting -> active
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781] boot
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:32:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:56 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:32:56 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test[90695]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 18:32:56 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test[90695]:                            [--no-systemd] [--no-tmpfs]
Nov 25 18:32:56 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test[90695]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 18:32:56 np0005535838 systemd[1]: libpod-25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e.scope: Deactivated successfully.
Nov 25 18:32:56 np0005535838 podman[90464]: 2025-11-25 23:32:56.491425253 +0000 UTC m=+0.757792578 container died 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:56 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe-merged.mount: Deactivated successfully.
Nov 25 18:32:56 np0005535838 podman[90464]: 2025-11-25 23:32:56.547084565 +0000 UTC m=+0.813451910 container remove 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:32:56 np0005535838 systemd[1]: libpod-conmon-25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e.scope: Deactivated successfully.
Nov 25 18:32:56 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 18:32:56 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 18:32:56 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:56 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:56 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 25 18:32:57 np0005535838 ceph-osd[90055]: osd.1 0 done with init, starting boot process
Nov 25 18:32:57 np0005535838 ceph-osd[90055]: osd.1 0 start_boot
Nov 25 18:32:57 np0005535838 ceph-osd[90055]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 18:32:57 np0005535838 ceph-osd[90055]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 18:32:57 np0005535838 ceph-osd[90055]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 18:32:57 np0005535838 ceph-osd[90055]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 18:32:57 np0005535838 ceph-osd[90055]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:32:57 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:57 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781] boot
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 18:32:57 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1453008780; not ready for session (expect reconnect)
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:57 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:57 np0005535838 systemd[1]: Reloading.
Nov 25 18:32:57 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:32:57 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:32:57 np0005535838 systemd[1]: Starting Ceph osd.2 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:32:57 np0005535838 podman[90855]: 2025-11-25 23:32:57.789459339 +0000 UTC m=+0.054019571 container create df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:32:57 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:57 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:57 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:57 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:57 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:57 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:57 np0005535838 podman[90855]: 2025-11-25 23:32:57.774204941 +0000 UTC m=+0.038765263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:57 np0005535838 podman[90855]: 2025-11-25 23:32:57.883440152 +0000 UTC m=+0.148000404 container init df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:32:57 np0005535838 podman[90855]: 2025-11-25 23:32:57.902980252 +0000 UTC m=+0.167540484 container start df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 18:32:57 np0005535838 podman[90855]: 2025-11-25 23:32:57.908445764 +0000 UTC m=+0.173006026 container attach df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:57 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 18:32:58 np0005535838 ceph-mgr[75954]: [devicehealth INFO root] creating mgr pool
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 18:32:58 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1453008780; not ready for session (expect reconnect)
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:58 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 25 18:32:58 np0005535838 ceph-osd[89044]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 18:32:58 np0005535838 ceph-osd[89044]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 25 18:32:58 np0005535838 ceph-osd[89044]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 18:32:58 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:58 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 25 18:32:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 18:32:59 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 18:32:59 np0005535838 bash[90855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 18:32:59 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 18:32:59 np0005535838 bash[90855]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 18:32:59 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 18:32:59 np0005535838 bash[90855]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 18:32:59 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 18:32:59 np0005535838 bash[90855]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 18:32:59 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 18:32:59 np0005535838 bash[90855]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 18:32:59 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 18:32:59 np0005535838 bash[90855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 18:32:59 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 18:32:59 np0005535838 bash[90855]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 18:32:59 np0005535838 systemd[1]: libpod-df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f.scope: Deactivated successfully.
Nov 25 18:32:59 np0005535838 systemd[1]: libpod-df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f.scope: Consumed 1.264s CPU time.
Nov 25 18:32:59 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1453008780; not ready for session (expect reconnect)
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:59 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 18:32:59 np0005535838 podman[91014]: 2025-11-25 23:32:59.231545094 +0000 UTC m=+0.049912083 container died df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:32:59 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:32:59 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:32:59 np0005535838 systemd[1]: var-lib-containers-storage-overlay-36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713-merged.mount: Deactivated successfully.
Nov 25 18:32:59 np0005535838 podman[91014]: 2025-11-25 23:32:59.306626484 +0000 UTC m=+0.124993413 container remove df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:59 np0005535838 python3[91021]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:32:59 np0005535838 podman[91041]: 2025-11-25 23:32:59.449887713 +0000 UTC m=+0.049589085 container create 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:32:59 np0005535838 systemd[1]: Started libpod-conmon-93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974.scope.
Nov 25 18:32:59 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:32:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb4d003a71d0e07d19be191d06af2442c0497305bb1e87b30e80cb5fb9abf58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb4d003a71d0e07d19be191d06af2442c0497305bb1e87b30e80cb5fb9abf58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb4d003a71d0e07d19be191d06af2442c0497305bb1e87b30e80cb5fb9abf58/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:59 np0005535838 podman[91041]: 2025-11-25 23:32:59.430887297 +0000 UTC m=+0.030588689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:32:59 np0005535838 podman[91041]: 2025-11-25 23:32:59.529023088 +0000 UTC m=+0.128724490 container init 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:32:59 np0005535838 podman[91041]: 2025-11-25 23:32:59.536970196 +0000 UTC m=+0.136671568 container start 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:32:59 np0005535838 podman[91041]: 2025-11-25 23:32:59.541620727 +0000 UTC m=+0.141322099 container attach 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:32:59 np0005535838 podman[91091]: 2025-11-25 23:32:59.553591599 +0000 UTC m=+0.043160477 container create 4adea0c725a0ee7ed4d56b00f30b101c6071ad14e4ab599d343ac613042f9d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 25 18:32:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:59 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 18:32:59 np0005535838 podman[91091]: 2025-11-25 23:32:59.607356593 +0000 UTC m=+0.096925491 container init 4adea0c725a0ee7ed4d56b00f30b101c6071ad14e4ab599d343ac613042f9d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 35.825 iops: 9171.129 elapsed_sec: 0.327
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: log_channel(cluster) log [WRN] : OSD bench result of 9171.128788 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: osd.1 0 waiting for initial osdmap
Nov 25 18:32:59 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1[90051]: 2025-11-25T23:32:59.607+0000 7f5fc5d51640 -1 osd.1 0 waiting for initial osdmap
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Nov 25 18:32:59 np0005535838 podman[91091]: 2025-11-25 23:32:59.617672902 +0000 UTC m=+0.107241780 container start 4adea0c725a0ee7ed4d56b00f30b101c6071ad14e4ab599d343ac613042f9d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:32:59 np0005535838 bash[91091]: 4adea0c725a0ee7ed4d56b00f30b101c6071ad14e4ab599d343ac613042f9d40
Nov 25 18:32:59 np0005535838 podman[91091]: 2025-11-25 23:32:59.535014324 +0000 UTC m=+0.024583212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:32:59 np0005535838 systemd[1]: Started Ceph osd.2 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:32:59 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-1[90051]: 2025-11-25T23:32:59.632+0000 7f5fc1379640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: osd.1 12 set_numa_affinity not setting numa affinity
Nov 25 18:32:59 np0005535838 ceph-osd[90055]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: pidfile_write: ignore empty --pid-file
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:32:59 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:32:59 np0005535838 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 18:32:59 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021532597' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 18:33:00 np0005535838 elastic_johnson[91083]: 
Nov 25 18:33:00 np0005535838 elastic_johnson[91083]: {"fsid":"101922db-575f-58e2-980f-928050464f69","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":109,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":12,"num_osds":3,"num_up_osds":1,"osd_up_since":1764113576,"num_in_osds":3,"osd_in_since":1764113559,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":446984192,"bytes_avail":21023657984,"bytes_total":21470642176},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T23:32:57.989925+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 25 18:33:00 np0005535838 systemd[1]: libpod-93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974.scope: Deactivated successfully.
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 25 18:33:00 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1453008780; not ready for session (expect reconnect)
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:33:00 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: load: jerasure load: lrc 
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 18:33:00 np0005535838 podman[91276]: 2025-11-25 23:33:00.191566169 +0000 UTC m=+0.024286574 container died 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:33:00 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2eb4d003a71d0e07d19be191d06af2442c0497305bb1e87b30e80cb5fb9abf58-merged.mount: Deactivated successfully.
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Nov 25 18:33:00 np0005535838 podman[91276]: 2025-11-25 23:33:00.246070662 +0000 UTC m=+0.078791087 container remove 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780] boot
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 18:33:00 np0005535838 systemd[1]: libpod-conmon-93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974.scope: Deactivated successfully.
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:33:00 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:33:00 np0005535838 ceph-osd[90055]: osd.1 13 state: booting -> active
Nov 25 18:33:00 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: OSD bench result of 9171.128788 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:00 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:00 np0005535838 podman[91302]: 2025-11-25 23:33:00.366281309 +0000 UTC m=+0.073500380 container create 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:00 np0005535838 systemd[1]: Started libpod-conmon-5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543.scope.
Nov 25 18:33:00 np0005535838 podman[91302]: 2025-11-25 23:33:00.333616116 +0000 UTC m=+0.040835237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:00 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 18:33:00 np0005535838 podman[91302]: 2025-11-25 23:33:00.48436384 +0000 UTC m=+0.191582961 container init 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:00 np0005535838 podman[91302]: 2025-11-25 23:33:00.497506914 +0000 UTC m=+0.204725985 container start 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:00 np0005535838 podman[91302]: 2025-11-25 23:33:00.501676053 +0000 UTC m=+0.208895194 container attach 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 18:33:00 np0005535838 funny_hermann[91318]: 167 167
Nov 25 18:33:00 np0005535838 systemd[1]: libpod-5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543.scope: Deactivated successfully.
Nov 25 18:33:00 np0005535838 podman[91302]: 2025-11-25 23:33:00.505676736 +0000 UTC m=+0.212895777 container died 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:00 np0005535838 systemd[1]: var-lib-containers-storage-overlay-825c979f592cd15ffb4ba87c7aa2f7094477f82e27d5ce872badf9220eb0d55d-merged.mount: Deactivated successfully.
Nov 25 18:33:00 np0005535838 podman[91302]: 2025-11-25 23:33:00.548165986 +0000 UTC m=+0.255385047 container remove 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:00 np0005535838 systemd[1]: libpod-conmon-5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543.scope: Deactivated successfully.
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluefs mount
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluefs mount shared_bdev_used = 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: RocksDB version: 7.9.2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Git sha 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: DB SUMMARY
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: DB Session ID:  QPL9YOS3W6R72EW4HN3U
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: CURRENT file:  CURRENT
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                         Options.error_if_exists: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.create_if_missing: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                                     Options.env: 0x56223dcd9c70
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                                Options.info_log: 0x56223cecc8a0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                              Options.statistics: (nil)
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.use_fsync: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                              Options.db_log_dir: 
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.write_buffer_manager: 0x56223dde2460
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.unordered_write: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.row_cache: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                              Options.wal_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.two_write_queues: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.wal_compression: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.atomic_flush: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.max_background_jobs: 4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.max_background_compactions: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.max_subcompactions: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.max_open_files: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Compression algorithms supported:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: #011kZSTD supported: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: #011kXpressCompression supported: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: #011kZlibCompression supported: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56223ceb91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56223ceb91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56223ceb91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56223ceb91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb9090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 podman[91372]: 2025-11-25 23:33:00.775609441 +0000 UTC m=+0.070489790 container create 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb9090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb9090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113580766822, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113580767032, "job": 1, "event": "recovery_finished"}
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: freelist init
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: freelist _read_cfg
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bluefs umount
Nov 25 18:33:00 np0005535838 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 18:33:00 np0005535838 systemd[1]: Started libpod-conmon-0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81.scope.
Nov 25 18:33:00 np0005535838 python3[91366]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:00 np0005535838 podman[91372]: 2025-11-25 23:33:00.745911986 +0000 UTC m=+0.040792365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:00 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:00 np0005535838 podman[91372]: 2025-11-25 23:33:00.872701296 +0000 UTC m=+0.167581715 container init 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:00 np0005535838 podman[91372]: 2025-11-25 23:33:00.884346699 +0000 UTC m=+0.179227038 container start 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:33:00 np0005535838 podman[91372]: 2025-11-25 23:33:00.888003384 +0000 UTC m=+0.182883733 container attach 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:00 np0005535838 podman[91585]: 2025-11-25 23:33:00.903270553 +0000 UTC m=+0.061780423 container create 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 18:33:00 np0005535838 systemd[1]: Started libpod-conmon-61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9.scope.
Nov 25 18:33:00 np0005535838 podman[91585]: 2025-11-25 23:33:00.868978968 +0000 UTC m=+0.027488838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:00 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22645ea58337fa75ac8ae6aabdfec1c0c8e15d8d6394f43cbf0f4fc36a0edcb3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22645ea58337fa75ac8ae6aabdfec1c0c8e15d8d6394f43cbf0f4fc36a0edcb3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:00 np0005535838 podman[91585]: 2025-11-25 23:33:00.989585406 +0000 UTC m=+0.148095316 container init 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:00 np0005535838 podman[91585]: 2025-11-25 23:33:00.995911001 +0000 UTC m=+0.154420861 container start 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:01 np0005535838 podman[91585]: 2025-11-25 23:33:01.000655215 +0000 UTC m=+0.159165095 container attach 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bluefs mount
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bluefs mount shared_bdev_used = 4718592
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: RocksDB version: 7.9.2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Git sha 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: DB SUMMARY
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: DB Session ID:  QPL9YOS3W6R72EW4HN3V
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: CURRENT file:  CURRENT
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                         Options.error_if_exists: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.create_if_missing: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                                     Options.env: 0x56223de8ab60
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                                Options.info_log: 0x56223dcd5a20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                              Options.statistics: (nil)
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.use_fsync: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                              Options.db_log_dir: 
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.write_buffer_manager: 0x56223dde26e0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.unordered_write: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.row_cache: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                              Options.wal_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.two_write_queues: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.wal_compression: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.atomic_flush: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.max_background_jobs: 4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.max_background_compactions: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.max_subcompactions: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.max_open_files: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Compression algorithms supported:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: 	kZSTD supported: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: 	kXpressCompression supported: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: 	kZlibCompression supported: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56223ceb91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56223ceb91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56223ceb91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56223ceb9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56223ceb9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56223ceb9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581041856, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581047549, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113581, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1", "db_session_id": "QPL9YOS3W6R72EW4HN3V", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581051046, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113581, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1", "db_session_id": "QPL9YOS3W6R72EW4HN3V", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581054833, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113581, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1", "db_session_id": "QPL9YOS3W6R72EW4HN3V", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581056787, "job": 1, "event": "recovery_finished"}
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56223d026000
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: DB pointer 0x56223ddcba00
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usag
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: _get_class not permitted to load lua
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: _get_class not permitted to load sdk
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: _get_class not permitted to load test_remote_reads
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: osd.2 0 load_pgs
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: osd.2 0 load_pgs opened 0 pgs
Nov 25 18:33:01 np0005535838 ceph-osd[91111]: osd.2 0 log_to_monitors true
Nov 25 18:33:01 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2[91107]: 2025-11-25T23:33:01.098+0000 7f7cf2338740 -1 osd.2 0 log_to_monitors true
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780] boot
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:33:01 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:33:01 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:01 np0005535838 ceph-mgr[75954]: [devicehealth INFO root] creating main.db for devicehealth
Nov 25 18:33:01 np0005535838 ceph-mgr[75954]: [devicehealth INFO root] Check health
Nov 25 18:33:01 np0005535838 ceph-mgr[75954]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 18:33:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4012357197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:01 np0005535838 brave_yalow[91582]: {
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "osd_id": 2,
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "type": "bluestore"
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:    },
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "osd_id": 1,
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "type": "bluestore"
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:    },
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "osd_id": 0,
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:        "type": "bluestore"
Nov 25 18:33:01 np0005535838 brave_yalow[91582]:    }
Nov 25 18:33:01 np0005535838 brave_yalow[91582]: }
Nov 25 18:33:01 np0005535838 systemd[1]: libpod-0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81.scope: Deactivated successfully.
Nov 25 18:33:01 np0005535838 podman[91372]: 2025-11-25 23:33:01.988051834 +0000 UTC m=+1.282932183 container died 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:33:01 np0005535838 systemd[1]: libpod-0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81.scope: Consumed 1.089s CPU time.
Nov 25 18:33:01 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 25 18:33:02 np0005535838 systemd[1]: var-lib-containers-storage-overlay-45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19-merged.mount: Deactivated successfully.
Nov 25 18:33:02 np0005535838 podman[91372]: 2025-11-25 23:33:02.064959271 +0000 UTC m=+1.359839660 container remove 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 18:33:02 np0005535838 systemd[1]: libpod-conmon-0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81.scope: Deactivated successfully.
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:02 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 18:33:02 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4012357197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 25 18:33:02 np0005535838 ceph-osd[91111]: osd.2 0 done with init, starting boot process
Nov 25 18:33:02 np0005535838 ceph-osd[91111]: osd.2 0 start_boot
Nov 25 18:33:02 np0005535838 ceph-osd[91111]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 18:33:02 np0005535838 ceph-osd[91111]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 18:33:02 np0005535838 ceph-osd[91111]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 18:33:02 np0005535838 ceph-osd[91111]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 18:33:02 np0005535838 ceph-osd[91111]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 25 18:33:02 np0005535838 gallant_goldberg[91602]: pool 'vms' created
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:33:02 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/4012357197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:02 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2972592351; not ready for session (expect reconnect)
Nov 25 18:33:02 np0005535838 systemd[1]: libpod-61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9.scope: Deactivated successfully.
Nov 25 18:33:02 np0005535838 podman[91585]: 2025-11-25 23:33:02.318638541 +0000 UTC m=+1.477148401 container died 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.gwqfsl(active, since 66s)
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:33:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:33:02 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:33:02 np0005535838 systemd[1]: var-lib-containers-storage-overlay-22645ea58337fa75ac8ae6aabdfec1c0c8e15d8d6394f43cbf0f4fc36a0edcb3-merged.mount: Deactivated successfully.
Nov 25 18:33:02 np0005535838 podman[91585]: 2025-11-25 23:33:02.429532965 +0000 UTC m=+1.588042805 container remove 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:02 np0005535838 systemd[1]: libpod-conmon-61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9.scope: Deactivated successfully.
Nov 25 18:33:02 np0005535838 python3[92061]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:02 np0005535838 podman[92089]: 2025-11-25 23:33:02.905116077 +0000 UTC m=+0.051671009 container create adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:02 np0005535838 systemd[1]: Started libpod-conmon-adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7.scope.
Nov 25 18:33:02 np0005535838 podman[92089]: 2025-11-25 23:33:02.887795316 +0000 UTC m=+0.034350338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:02 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:02 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d085a551c01f9c012618accd6ccd03f4f0ad02cb851fe43aa5447daddd85da5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:02 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d085a551c01f9c012618accd6ccd03f4f0ad02cb851fe43aa5447daddd85da5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:03 np0005535838 podman[92089]: 2025-11-25 23:33:03.000310482 +0000 UTC m=+0.146865424 container init adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 18:33:03 np0005535838 podman[92089]: 2025-11-25 23:33:03.009991584 +0000 UTC m=+0.156546556 container start adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:03 np0005535838 podman[92089]: 2025-11-25 23:33:03.016183616 +0000 UTC m=+0.162738558 container attach adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:03 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2972592351; not ready for session (expect reconnect)
Nov 25 18:33:03 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:33:03 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:33:03 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:33:03 np0005535838 ceph-mon[75654]: from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 18:33:03 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/4012357197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:03 np0005535838 podman[92180]: 2025-11-25 23:33:03.36579041 +0000 UTC m=+0.086580471 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:03 np0005535838 podman[92180]: 2025-11-25 23:33:03.48073571 +0000 UTC m=+0.201525731 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 18:33:03 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 18:33:03 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2148731035' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:03 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v40: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:04 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2972592351; not ready for session (expect reconnect)
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:33:04 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2148731035' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 25 18:33:04 np0005535838 focused_curie[92125]: pool 'volumes' created
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/2148731035' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:33:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:33:04 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:33:04 np0005535838 systemd[1]: libpod-adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7.scope: Deactivated successfully.
Nov 25 18:33:04 np0005535838 podman[92089]: 2025-11-25 23:33:04.364824713 +0000 UTC m=+1.511379675 container died adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:04 np0005535838 systemd[1]: var-lib-containers-storage-overlay-3d085a551c01f9c012618accd6ccd03f4f0ad02cb851fe43aa5447daddd85da5-merged.mount: Deactivated successfully.
Nov 25 18:33:04 np0005535838 podman[92089]: 2025-11-25 23:33:04.449820911 +0000 UTC m=+1.596375853 container remove adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:33:04 np0005535838 systemd[1]: libpod-conmon-adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7.scope: Deactivated successfully.
Nov 25 18:33:04 np0005535838 python3[92470]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:04 np0005535838 podman[92499]: 2025-11-25 23:33:04.813292197 +0000 UTC m=+0.043649220 container create 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:04 np0005535838 systemd[1]: Started libpod-conmon-12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19.scope.
Nov 25 18:33:04 np0005535838 podman[92500]: 2025-11-25 23:33:04.851236567 +0000 UTC m=+0.070175302 container create 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:04 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:04 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45389f5750efdbd029f68dbb9ca2288cc69393e186d0b5c5cbcbe4a424ffad9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:04 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45389f5750efdbd029f68dbb9ca2288cc69393e186d0b5c5cbcbe4a424ffad9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:04 np0005535838 systemd[1]: Started libpod-conmon-2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06.scope.
Nov 25 18:33:04 np0005535838 podman[92499]: 2025-11-25 23:33:04.881359744 +0000 UTC m=+0.111716797 container init 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:04 np0005535838 podman[92499]: 2025-11-25 23:33:04.791879168 +0000 UTC m=+0.022236201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:04 np0005535838 podman[92499]: 2025-11-25 23:33:04.888552281 +0000 UTC m=+0.118909324 container start 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:33:04 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:04 np0005535838 podman[92499]: 2025-11-25 23:33:04.895947084 +0000 UTC m=+0.126304157 container attach 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.763 iops: 8387.282 elapsed_sec: 0.358
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: log_channel(cluster) log [WRN] : OSD bench result of 8387.282290 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 18:33:04 np0005535838 podman[92500]: 2025-11-25 23:33:04.811395168 +0000 UTC m=+0.030333933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: osd.2 0 waiting for initial osdmap
Nov 25 18:33:04 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2[91107]: 2025-11-25T23:33:04.905+0000 7f7ceeacf640 -1 osd.2 0 waiting for initial osdmap
Nov 25 18:33:04 np0005535838 podman[92500]: 2025-11-25 23:33:04.909080706 +0000 UTC m=+0.128019441 container init 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:33:04 np0005535838 podman[92500]: 2025-11-25 23:33:04.913841841 +0000 UTC m=+0.132780586 container start 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Nov 25 18:33:04 np0005535838 modest_tesla[92534]: 167 167
Nov 25 18:33:04 np0005535838 systemd[1]: libpod-2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06.scope: Deactivated successfully.
Nov 25 18:33:04 np0005535838 podman[92500]: 2025-11-25 23:33:04.921129631 +0000 UTC m=+0.140068366 container attach 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:04 np0005535838 podman[92500]: 2025-11-25 23:33:04.921955963 +0000 UTC m=+0.140894678 container died 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: osd.2 16 set_numa_affinity not setting numa affinity
Nov 25 18:33:04 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-osd-2[91107]: 2025-11-25T23:33:04.931+0000 7f7ce98e0640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 18:33:04 np0005535838 ceph-osd[91111]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 25 18:33:04 np0005535838 systemd[1]: var-lib-containers-storage-overlay-bd3ab010e57c0a76640193618e37e3a410729a70eec1d6d28d6bbe0c64334b8c-merged.mount: Deactivated successfully.
Nov 25 18:33:04 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:04 np0005535838 podman[92500]: 2025-11-25 23:33:04.963891967 +0000 UTC m=+0.182830682 container remove 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 18:33:04 np0005535838 systemd[1]: libpod-conmon-2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06.scope: Deactivated successfully.
Nov 25 18:33:05 np0005535838 podman[92560]: 2025-11-25 23:33:05.097296149 +0000 UTC m=+0.037011197 container create b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 18:33:05 np0005535838 systemd[1]: Started libpod-conmon-b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59.scope.
Nov 25 18:33:05 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:05 np0005535838 podman[92560]: 2025-11-25 23:33:05.156518684 +0000 UTC m=+0.096233782 container init b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 18:33:05 np0005535838 podman[92560]: 2025-11-25 23:33:05.164977925 +0000 UTC m=+0.104692963 container start b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:05 np0005535838 podman[92560]: 2025-11-25 23:33:05.167960813 +0000 UTC m=+0.107675891 container attach b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 18:33:05 np0005535838 podman[92560]: 2025-11-25 23:33:05.08126848 +0000 UTC m=+0.020983548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:05 np0005535838 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2972592351; not ready for session (expect reconnect)
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:33:05 np0005535838 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351] boot
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/2148731035' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: OSD bench result of 8387.282290 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 18:33:05 np0005535838 ceph-osd[91111]: osd.2 17 state: booting -> active
Nov 25 18:33:05 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[15,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:05 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 18:33:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2947604844' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:05 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v43: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351] boot
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/2947604844' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2947604844' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 25 18:33:06 np0005535838 musing_tharp[92528]: pool 'backups' created
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 25 18:33:06 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[15,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:06 np0005535838 systemd[1]: libpod-12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19.scope: Deactivated successfully.
Nov 25 18:33:06 np0005535838 podman[92499]: 2025-11-25 23:33:06.390345665 +0000 UTC m=+1.620702718 container died 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 18:33:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay-45389f5750efdbd029f68dbb9ca2288cc69393e186d0b5c5cbcbe4a424ffad9e-merged.mount: Deactivated successfully.
Nov 25 18:33:06 np0005535838 podman[92499]: 2025-11-25 23:33:06.432454533 +0000 UTC m=+1.662811556 container remove 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:06 np0005535838 systemd[1]: libpod-conmon-12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19.scope: Deactivated successfully.
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]: [
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:    {
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        "available": false,
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        "ceph_device": false,
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        "lsm_data": {},
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        "lvs": [],
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        "path": "/dev/sr0",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        "rejected_reasons": [
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "Insufficient space (<5GB)",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "Has a FileSystem"
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        ],
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        "sys_api": {
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "actuators": null,
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "device_nodes": "sr0",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "devname": "sr0",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "human_readable_size": "482.00 KB",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "id_bus": "ata",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "model": "QEMU DVD-ROM",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "nr_requests": "2",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "parent": "/dev/sr0",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "partitions": {},
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "path": "/dev/sr0",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "removable": "1",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "rev": "2.5+",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "ro": "0",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "rotational": "1",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "sas_address": "",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "sas_device_handle": "",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "scheduler_mode": "mq-deadline",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "sectors": 0,
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "sectorsize": "2048",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "size": 493568.0,
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "support_discard": "2048",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "type": "disk",
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:            "vendor": "QEMU"
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:        }
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]:    }
Nov 25 18:33:06 np0005535838 sleepy_pascal[92577]: ]
Nov 25 18:33:06 np0005535838 systemd[1]: libpod-b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59.scope: Deactivated successfully.
Nov 25 18:33:06 np0005535838 podman[92560]: 2025-11-25 23:33:06.486555516 +0000 UTC m=+1.426270564 container died b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:06 np0005535838 systemd[1]: libpod-b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59.scope: Consumed 1.344s CPU time.
Nov 25 18:33:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay-374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41-merged.mount: Deactivated successfully.
Nov 25 18:33:06 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:06 np0005535838 podman[92560]: 2025-11-25 23:33:06.539062797 +0000 UTC m=+1.478777855 container remove b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:06 np0005535838 systemd[1]: libpod-conmon-b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59.scope: Deactivated successfully.
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 18:33:06 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 18:33:06 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mgr[75954]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 18:33:06 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:06 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 1354eb9c-04eb-4daa-b985-60786bec6847 does not exist
Nov 25 18:33:06 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 0d4b1bb6-3fe4-42c2-af98-0e58d1c5aa27 does not exist
Nov 25 18:33:06 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 84cd5a23-c4d2-4bc8-9ac8-3bc82ffdeca4 does not exist
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:06 np0005535838 python3[94439]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:06 np0005535838 podman[94496]: 2025-11-25 23:33:06.780158389 +0000 UTC m=+0.039359699 container create c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:06 np0005535838 systemd[1]: Started libpod-conmon-c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e.scope.
Nov 25 18:33:06 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:06 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a81d21b0c9b652a6a308411996b5d46834deac692be7557f52cfe10850c471f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:06 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a81d21b0c9b652a6a308411996b5d46834deac692be7557f52cfe10850c471f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:06 np0005535838 podman[94496]: 2025-11-25 23:33:06.766360328 +0000 UTC m=+0.025561658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:06 np0005535838 podman[94496]: 2025-11-25 23:33:06.863592045 +0000 UTC m=+0.122793385 container init c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:06 np0005535838 podman[94496]: 2025-11-25 23:33:06.870454745 +0000 UTC m=+0.129656095 container start c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:06 np0005535838 podman[94496]: 2025-11-25 23:33:06.874261344 +0000 UTC m=+0.133462684 container attach c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:07 np0005535838 podman[94599]: 2025-11-25 23:33:07.14309758 +0000 UTC m=+0.037177731 container create 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:07 np0005535838 systemd[1]: Started libpod-conmon-96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e.scope.
Nov 25 18:33:07 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:07 np0005535838 podman[94599]: 2025-11-25 23:33:07.207951813 +0000 UTC m=+0.102031984 container init 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:07 np0005535838 podman[94599]: 2025-11-25 23:33:07.21513494 +0000 UTC m=+0.109215091 container start 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 18:33:07 np0005535838 epic_bohr[94617]: 167 167
Nov 25 18:33:07 np0005535838 systemd[1]: libpod-96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e.scope: Deactivated successfully.
Nov 25 18:33:07 np0005535838 podman[94599]: 2025-11-25 23:33:07.219087513 +0000 UTC m=+0.113167694 container attach 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:33:07 np0005535838 podman[94599]: 2025-11-25 23:33:07.219835743 +0000 UTC m=+0.113915894 container died 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:07 np0005535838 podman[94599]: 2025-11-25 23:33:07.125345497 +0000 UTC m=+0.019425668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:07 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c61c6730fe8643d0fbea1f08bf1531670e9f447dce0266a11c55fc2680ebc05d-merged.mount: Deactivated successfully.
Nov 25 18:33:07 np0005535838 podman[94599]: 2025-11-25 23:33:07.249749084 +0000 UTC m=+0.143829235 container remove 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:07 np0005535838 systemd[1]: libpod-conmon-96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e.scope: Deactivated successfully.
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/2947604844' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:07 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 18:33:07 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3214176959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:07 np0005535838 podman[94658]: 2025-11-25 23:33:07.427860692 +0000 UTC m=+0.052226804 container create a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 18:33:07 np0005535838 systemd[1]: Started libpod-conmon-a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b.scope.
Nov 25 18:33:07 np0005535838 podman[94658]: 2025-11-25 23:33:07.408484006 +0000 UTC m=+0.032850148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:07 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:07 np0005535838 podman[94658]: 2025-11-25 23:33:07.515837348 +0000 UTC m=+0.140203480 container init a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:07 np0005535838 podman[94658]: 2025-11-25 23:33:07.525129231 +0000 UTC m=+0.149495333 container start a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 18:33:07 np0005535838 podman[94658]: 2025-11-25 23:33:07.528676943 +0000 UTC m=+0.153043045 container attach a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:07 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v46: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 25 18:33:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3214176959' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 25 18:33:08 np0005535838 trusting_nobel[94555]: pool 'images' created
Nov 25 18:33:08 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 25 18:33:08 np0005535838 ceph-mon[75654]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 18:33:08 np0005535838 ceph-mon[75654]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 18:33:08 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3214176959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:08 np0005535838 systemd[1]: libpod-c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e.scope: Deactivated successfully.
Nov 25 18:33:08 np0005535838 conmon[94555]: conmon c3e69e5fa3707aa6a37c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e.scope/container/memory.events
Nov 25 18:33:08 np0005535838 podman[94702]: 2025-11-25 23:33:08.495010502 +0000 UTC m=+0.042058779 container died c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:33:08 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8a81d21b0c9b652a6a308411996b5d46834deac692be7557f52cfe10850c471f-merged.mount: Deactivated successfully.
Nov 25 18:33:08 np0005535838 podman[94702]: 2025-11-25 23:33:08.537071591 +0000 UTC m=+0.084119858 container remove c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 18:33:08 np0005535838 systemd[1]: libpod-conmon-c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e.scope: Deactivated successfully.
Nov 25 18:33:08 np0005535838 vigorous_bartik[94678]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:33:08 np0005535838 vigorous_bartik[94678]: --> relative data size: 1.0
Nov 25 18:33:08 np0005535838 vigorous_bartik[94678]: --> All data devices are unavailable
Nov 25 18:33:08 np0005535838 systemd[1]: libpod-a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b.scope: Deactivated successfully.
Nov 25 18:33:08 np0005535838 podman[94658]: 2025-11-25 23:33:08.587777254 +0000 UTC m=+1.212143366 container died a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:08 np0005535838 systemd[1]: var-lib-containers-storage-overlay-0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333-merged.mount: Deactivated successfully.
Nov 25 18:33:08 np0005535838 podman[94658]: 2025-11-25 23:33:08.639091242 +0000 UTC m=+1.263457344 container remove a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:33:08 np0005535838 systemd[1]: libpod-conmon-a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b.scope: Deactivated successfully.
Nov 25 18:33:08 np0005535838 python3[94783]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:08 np0005535838 podman[94842]: 2025-11-25 23:33:08.908271568 +0000 UTC m=+0.049127383 container create c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:08 np0005535838 systemd[1]: Started libpod-conmon-c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39.scope.
Nov 25 18:33:08 np0005535838 podman[94842]: 2025-11-25 23:33:08.883630345 +0000 UTC m=+0.024486250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:08 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9da050e78968385c17d86bd9934a095c93c62dfe03048f6adcbf1408a81c8d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9da050e78968385c17d86bd9934a095c93c62dfe03048f6adcbf1408a81c8d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:09 np0005535838 podman[94842]: 2025-11-25 23:33:09.011963234 +0000 UTC m=+0.152819149 container init c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:33:09 np0005535838 podman[94842]: 2025-11-25 23:33:09.024016548 +0000 UTC m=+0.164872373 container start c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:09 np0005535838 podman[94842]: 2025-11-25 23:33:09.030559019 +0000 UTC m=+0.171415074 container attach c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:33:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:09 np0005535838 podman[94918]: 2025-11-25 23:33:09.22371498 +0000 UTC m=+0.040831246 container create 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:33:09 np0005535838 systemd[1]: Started libpod-conmon-49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9.scope.
Nov 25 18:33:09 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:09 np0005535838 podman[94918]: 2025-11-25 23:33:09.29190467 +0000 UTC m=+0.109020936 container init 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 18:33:09 np0005535838 podman[94918]: 2025-11-25 23:33:09.299209611 +0000 UTC m=+0.116325877 container start 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:09 np0005535838 podman[94918]: 2025-11-25 23:33:09.207777515 +0000 UTC m=+0.024893801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:09 np0005535838 podman[94918]: 2025-11-25 23:33:09.302353502 +0000 UTC m=+0.119469768 container attach 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:09 np0005535838 heuristic_borg[94936]: 167 167
Nov 25 18:33:09 np0005535838 systemd[1]: libpod-49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9.scope: Deactivated successfully.
Nov 25 18:33:09 np0005535838 conmon[94936]: conmon 49d5f6a3b9b309113529 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9.scope/container/memory.events
Nov 25 18:33:09 np0005535838 podman[94918]: 2025-11-25 23:33:09.30493129 +0000 UTC m=+0.122047566 container died 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:09 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d65ba2f3699399f3c6879ba765f405421efbafad40fb55bc43344fde056530ed-merged.mount: Deactivated successfully.
Nov 25 18:33:09 np0005535838 podman[94918]: 2025-11-25 23:33:09.336417811 +0000 UTC m=+0.153534077 container remove 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 18:33:09 np0005535838 systemd[1]: libpod-conmon-49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9.scope: Deactivated successfully.
Nov 25 18:33:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 25 18:33:09 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3214176959' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 25 18:33:09 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 25 18:33:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:09 np0005535838 podman[94978]: 2025-11-25 23:33:09.515044713 +0000 UTC m=+0.043225839 container create 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 18:33:09 np0005535838 systemd[1]: Started libpod-conmon-6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619.scope.
Nov 25 18:33:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 18:33:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4149178136' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:09 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:09 np0005535838 podman[94978]: 2025-11-25 23:33:09.498484851 +0000 UTC m=+0.026665987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:09 np0005535838 podman[94978]: 2025-11-25 23:33:09.597907376 +0000 UTC m=+0.126088532 container init 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 18:33:09 np0005535838 podman[94978]: 2025-11-25 23:33:09.603227965 +0000 UTC m=+0.131409081 container start 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:09 np0005535838 podman[94978]: 2025-11-25 23:33:09.606951262 +0000 UTC m=+0.135132388 container attach 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 25 18:33:09 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v49: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:10 np0005535838 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]: {
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:    "0": [
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:        {
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "devices": [
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "/dev/loop3"
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            ],
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_name": "ceph_lv0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_size": "21470642176",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "name": "ceph_lv0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "tags": {
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.crush_device_class": "",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.encrypted": "0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.osd_id": "0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.type": "block",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.vdo": "0"
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            },
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "type": "block",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "vg_name": "ceph_vg0"
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:        }
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:    ],
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:    "1": [
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:        {
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "devices": [
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "/dev/loop4"
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            ],
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_name": "ceph_lv1",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_size": "21470642176",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "name": "ceph_lv1",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "tags": {
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.crush_device_class": "",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.encrypted": "0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.osd_id": "1",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.type": "block",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.vdo": "0"
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            },
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "type": "block",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "vg_name": "ceph_vg1"
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:        }
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:    ],
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:    "2": [
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:        {
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "devices": [
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "/dev/loop5"
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            ],
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_name": "ceph_lv2",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_size": "21470642176",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "name": "ceph_lv2",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "tags": {
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.crush_device_class": "",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.encrypted": "0",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.osd_id": "2",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.type": "block",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:                "ceph.vdo": "0"
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            },
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "type": "block",
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:            "vg_name": "ceph_vg2"
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:        }
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]:    ]
Nov 25 18:33:10 np0005535838 stoic_bartik[94995]: }
Nov 25 18:33:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 25 18:33:10 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4149178136' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 25 18:33:10 np0005535838 wizardly_swanson[94875]: pool 'cephfs.cephfs.meta' created
Nov 25 18:33:10 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 25 18:33:10 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/4149178136' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:10 np0005535838 ceph-mon[75654]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:10 np0005535838 systemd[1]: libpod-6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619.scope: Deactivated successfully.
Nov 25 18:33:10 np0005535838 podman[94978]: 2025-11-25 23:33:10.441272417 +0000 UTC m=+0.969453553 container died 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:33:10 np0005535838 systemd[1]: libpod-c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39.scope: Deactivated successfully.
Nov 25 18:33:10 np0005535838 podman[94842]: 2025-11-25 23:33:10.449956923 +0000 UTC m=+1.590812738 container died c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:33:10 np0005535838 systemd[1]: var-lib-containers-storage-overlay-5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190-merged.mount: Deactivated successfully.
Nov 25 18:33:10 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b9da050e78968385c17d86bd9934a095c93c62dfe03048f6adcbf1408a81c8d9-merged.mount: Deactivated successfully.
Nov 25 18:33:10 np0005535838 podman[94978]: 2025-11-25 23:33:10.519257961 +0000 UTC m=+1.047439077 container remove 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:10 np0005535838 podman[94842]: 2025-11-25 23:33:10.527362253 +0000 UTC m=+1.668218068 container remove c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:33:10 np0005535838 systemd[1]: libpod-conmon-6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619.scope: Deactivated successfully.
Nov 25 18:33:10 np0005535838 systemd[1]: libpod-conmon-c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39.scope: Deactivated successfully.
Nov 25 18:33:10 np0005535838 python3[95125]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:10 np0005535838 podman[95156]: 2025-11-25 23:33:10.97135567 +0000 UTC m=+0.068941520 container create 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 18:33:11 np0005535838 systemd[1]: Started libpod-conmon-7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373.scope.
Nov 25 18:33:11 np0005535838 podman[95156]: 2025-11-25 23:33:10.947117768 +0000 UTC m=+0.044703718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:11 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d34920ed697ec855e458a6680b7134e6eda06ed8dd4d76975ac69081dbfad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d34920ed697ec855e458a6680b7134e6eda06ed8dd4d76975ac69081dbfad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:11 np0005535838 podman[95156]: 2025-11-25 23:33:11.068822794 +0000 UTC m=+0.166408764 container init 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:33:11 np0005535838 podman[95156]: 2025-11-25 23:33:11.078920077 +0000 UTC m=+0.176505927 container start 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:33:11 np0005535838 podman[95156]: 2025-11-25 23:33:11.082591564 +0000 UTC m=+0.180177454 container attach 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:11 np0005535838 podman[95213]: 2025-11-25 23:33:11.262010956 +0000 UTC m=+0.065593793 container create 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 18:33:11 np0005535838 systemd[1]: Started libpod-conmon-938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645.scope.
Nov 25 18:33:11 np0005535838 podman[95213]: 2025-11-25 23:33:11.233255555 +0000 UTC m=+0.036838452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:11 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:11 np0005535838 podman[95213]: 2025-11-25 23:33:11.355807934 +0000 UTC m=+0.159390791 container init 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:11 np0005535838 podman[95213]: 2025-11-25 23:33:11.363483834 +0000 UTC m=+0.167066681 container start 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:33:11 np0005535838 podman[95213]: 2025-11-25 23:33:11.367483858 +0000 UTC m=+0.171066705 container attach 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 25 18:33:11 np0005535838 inspiring_franklin[95230]: 167 167
Nov 25 18:33:11 np0005535838 systemd[1]: libpod-938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645.scope: Deactivated successfully.
Nov 25 18:33:11 np0005535838 podman[95213]: 2025-11-25 23:33:11.372114299 +0000 UTC m=+0.175697146 container died 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:33:11 np0005535838 systemd[1]: var-lib-containers-storage-overlay-e39d43bcf002a7d0ad93d272ff05b226ff31da6f4424d70e2324b3b6ce29ee22-merged.mount: Deactivated successfully.
Nov 25 18:33:11 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:11 np0005535838 podman[95213]: 2025-11-25 23:33:11.409211497 +0000 UTC m=+0.212794344 container remove 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 18:33:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 25 18:33:11 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/4149178136' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 25 18:33:11 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 25 18:33:11 np0005535838 systemd[1]: libpod-conmon-938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645.scope: Deactivated successfully.
Nov 25 18:33:11 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:11 np0005535838 podman[95272]: 2025-11-25 23:33:11.600252003 +0000 UTC m=+0.057685526 container create 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:33:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 18:33:11 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1077117350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:11 np0005535838 systemd[1]: Started libpod-conmon-64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8.scope.
Nov 25 18:33:11 np0005535838 podman[95272]: 2025-11-25 23:33:11.569364377 +0000 UTC m=+0.026797950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:11 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:11 np0005535838 podman[95272]: 2025-11-25 23:33:11.704384131 +0000 UTC m=+0.161817644 container init 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:11 np0005535838 podman[95272]: 2025-11-25 23:33:11.721034316 +0000 UTC m=+0.178467829 container start 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:33:11 np0005535838 podman[95272]: 2025-11-25 23:33:11.724985378 +0000 UTC m=+0.182418901 container attach 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:11 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v52: 6 pgs: 1 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 25 18:33:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1077117350' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 25 18:33:12 np0005535838 trusting_faraday[95190]: pool 'cephfs.cephfs.data' created
Nov 25 18:33:12 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1077117350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 18:33:12 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 25 18:33:12 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:12 np0005535838 systemd[1]: libpod-7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373.scope: Deactivated successfully.
Nov 25 18:33:12 np0005535838 podman[95309]: 2025-11-25 23:33:12.545568144 +0000 UTC m=+0.031546694 container died 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:12 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d56d34920ed697ec855e458a6680b7134e6eda06ed8dd4d76975ac69081dbfad-merged.mount: Deactivated successfully.
Nov 25 18:33:12 np0005535838 podman[95309]: 2025-11-25 23:33:12.597581822 +0000 UTC m=+0.083560382 container remove 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:12 np0005535838 systemd[1]: libpod-conmon-7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373.scope: Deactivated successfully.
Nov 25 18:33:12 np0005535838 epic_banzai[95292]: {
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "osd_id": 2,
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "type": "bluestore"
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:    },
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "osd_id": 1,
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "type": "bluestore"
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:    },
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "osd_id": 0,
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:        "type": "bluestore"
Nov 25 18:33:12 np0005535838 epic_banzai[95292]:    }
Nov 25 18:33:12 np0005535838 epic_banzai[95292]: }
Nov 25 18:33:12 np0005535838 systemd[1]: libpod-64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8.scope: Deactivated successfully.
Nov 25 18:33:12 np0005535838 podman[95272]: 2025-11-25 23:33:12.73889126 +0000 UTC m=+1.196324753 container died 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:12 np0005535838 systemd[1]: libpod-64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8.scope: Consumed 1.020s CPU time.
Nov 25 18:33:12 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588-merged.mount: Deactivated successfully.
Nov 25 18:33:12 np0005535838 podman[95272]: 2025-11-25 23:33:12.8071234 +0000 UTC m=+1.264556893 container remove 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 18:33:12 np0005535838 systemd[1]: libpod-conmon-64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8.scope: Deactivated successfully.
Nov 25 18:33:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:12 np0005535838 python3[95374]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 18:33:13 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:13 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 18:33:13 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 18:33:13 np0005535838 podman[95421]: 2025-11-25 23:33:13.077160567 +0000 UTC m=+0.062484822 container create da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:13 np0005535838 systemd[1]: Started libpod-conmon-da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc.scope.
Nov 25 18:33:13 np0005535838 podman[95421]: 2025-11-25 23:33:13.044236439 +0000 UTC m=+0.029560744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:13 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6be747229267e684140fdeaa16253ef6eb3ee8e0ed150a1ee3196b0f2a645a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6be747229267e684140fdeaa16253ef6eb3ee8e0ed150a1ee3196b0f2a645a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:13 np0005535838 podman[95421]: 2025-11-25 23:33:13.181822509 +0000 UTC m=+0.167146794 container init da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:13 np0005535838 podman[95421]: 2025-11-25 23:33:13.18954254 +0000 UTC m=+0.174866805 container start da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:13 np0005535838 podman[95421]: 2025-11-25 23:33:13.193462533 +0000 UTC m=+0.178786768 container attach da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1077117350' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 18:33:13 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:13 np0005535838 podman[95579]: 2025-11-25 23:33:13.658045517 +0000 UTC m=+0.048295011 container create ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 18:33:13 np0005535838 systemd[1]: Started libpod-conmon-ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd.scope.
Nov 25 18:33:13 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:13 np0005535838 podman[95579]: 2025-11-25 23:33:13.630823897 +0000 UTC m=+0.021073441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:13 np0005535838 podman[95579]: 2025-11-25 23:33:13.741082594 +0000 UTC m=+0.131332088 container init ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:33:13 np0005535838 podman[95579]: 2025-11-25 23:33:13.748469328 +0000 UTC m=+0.138718832 container start ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:33:13 np0005535838 podman[95579]: 2025-11-25 23:33:13.75242204 +0000 UTC m=+0.142671534 container attach ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:13 np0005535838 competent_diffie[95595]: 167 167
Nov 25 18:33:13 np0005535838 systemd[1]: libpod-ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd.scope: Deactivated successfully.
Nov 25 18:33:13 np0005535838 conmon[95595]: conmon ebd55332a902302297ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd.scope/container/memory.events
Nov 25 18:33:13 np0005535838 podman[95579]: 2025-11-25 23:33:13.755130451 +0000 UTC m=+0.145379925 container died ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3830652392' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 18:33:13 np0005535838 systemd[1]: var-lib-containers-storage-overlay-cd18963a384e11a008cc8fa1ecf0180f97aa6779f3b9b9abc7a8666fc4c178ef-merged.mount: Deactivated successfully.
Nov 25 18:33:13 np0005535838 podman[95579]: 2025-11-25 23:33:13.802015645 +0000 UTC m=+0.192265139 container remove ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:13 np0005535838 systemd[1]: libpod-conmon-ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd.scope: Deactivated successfully.
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:13 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.gwqfsl (unknown last config time)...
Nov 25 18:33:13 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.gwqfsl (unknown last config time)...
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.gwqfsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gwqfsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:13 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.gwqfsl on compute-0
Nov 25 18:33:13 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.gwqfsl on compute-0
Nov 25 18:33:13 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3830652392' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 25 18:33:14 np0005535838 flamboyant_jang[95460]: enabled application 'rbd' on pool 'vms'
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3830652392' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: Reconfiguring mgr.compute-0.gwqfsl (unknown last config time)...
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gwqfsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: Reconfiguring daemon mgr.compute-0.gwqfsl on compute-0
Nov 25 18:33:14 np0005535838 podman[95731]: 2025-11-25 23:33:14.511517202 +0000 UTC m=+0.075661807 container create 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:14 np0005535838 systemd[1]: libpod-da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc.scope: Deactivated successfully.
Nov 25 18:33:14 np0005535838 podman[95421]: 2025-11-25 23:33:14.52526313 +0000 UTC m=+1.510587355 container died da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:14 np0005535838 systemd[1]: Started libpod-conmon-3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5.scope.
Nov 25 18:33:14 np0005535838 podman[95731]: 2025-11-25 23:33:14.48158043 +0000 UTC m=+0.045725095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:14 np0005535838 podman[95421]: 2025-11-25 23:33:14.573816597 +0000 UTC m=+1.559140822 container remove da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 18:33:14 np0005535838 systemd[1]: var-lib-containers-storage-overlay-0d6be747229267e684140fdeaa16253ef6eb3ee8e0ed150a1ee3196b0f2a645a-merged.mount: Deactivated successfully.
Nov 25 18:33:14 np0005535838 systemd[1]: libpod-conmon-da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc.scope: Deactivated successfully.
Nov 25 18:33:14 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:14 np0005535838 podman[95731]: 2025-11-25 23:33:14.621481681 +0000 UTC m=+0.185626336 container init 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 18:33:14 np0005535838 podman[95731]: 2025-11-25 23:33:14.631567624 +0000 UTC m=+0.195712239 container start 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:33:14 np0005535838 podman[95731]: 2025-11-25 23:33:14.635153588 +0000 UTC m=+0.199298203 container attach 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:14 np0005535838 thirsty_wilbur[95760]: 167 167
Nov 25 18:33:14 np0005535838 systemd[1]: libpod-3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5.scope: Deactivated successfully.
Nov 25 18:33:14 np0005535838 podman[95731]: 2025-11-25 23:33:14.637627703 +0000 UTC m=+0.201772348 container died 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:14 np0005535838 systemd[1]: var-lib-containers-storage-overlay-31b8adea9c525b0760fd26a1f31c84a6aed6f296a2544568ed706b8e3f308b2c-merged.mount: Deactivated successfully.
Nov 25 18:33:14 np0005535838 podman[95731]: 2025-11-25 23:33:14.686326333 +0000 UTC m=+0.250470918 container remove 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:33:14 np0005535838 systemd[1]: libpod-conmon-3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5.scope: Deactivated successfully.
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:14 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:14 np0005535838 python3[95816]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:15 np0005535838 podman[95878]: 2025-11-25 23:33:15.024681864 +0000 UTC m=+0.048264870 container create 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:15 np0005535838 systemd[1]: Started libpod-conmon-2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93.scope.
Nov 25 18:33:15 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:15 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b960c4b878cd61dad40846722b8b4e6734b164926da79649f03c875f9081bb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:15 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b960c4b878cd61dad40846722b8b4e6734b164926da79649f03c875f9081bb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:15 np0005535838 podman[95878]: 2025-11-25 23:33:15.007578308 +0000 UTC m=+0.031161334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:15 np0005535838 podman[95878]: 2025-11-25 23:33:15.120234047 +0000 UTC m=+0.143817153 container init 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:15 np0005535838 podman[95878]: 2025-11-25 23:33:15.158145136 +0000 UTC m=+0.181728142 container start 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:33:15 np0005535838 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:15 np0005535838 podman[95878]: 2025-11-25 23:33:15.206214412 +0000 UTC m=+0.229797418 container attach 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:33:15 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3830652392' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 18:33:15 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:15 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:15 np0005535838 ceph-mon[75654]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:15 np0005535838 podman[96009]: 2025-11-25 23:33:15.539701315 +0000 UTC m=+0.064481144 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 25 18:33:15 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213799370' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 18:33:15 np0005535838 podman[96030]: 2025-11-25 23:33:15.713256244 +0000 UTC m=+0.050405846 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:15 np0005535838 podman[96009]: 2025-11-25 23:33:15.725594986 +0000 UTC m=+0.250374785 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:33:15 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3213799370' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213799370' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 25 18:33:16 np0005535838 magical_morse[95919]: enabled application 'rbd' on pool 'volumes'
Nov 25 18:33:16 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 25 18:33:16 np0005535838 systemd[1]: libpod-2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93.scope: Deactivated successfully.
Nov 25 18:33:16 np0005535838 podman[95878]: 2025-11-25 23:33:16.554390987 +0000 UTC m=+1.577974023 container died 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:16 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7b960c4b878cd61dad40846722b8b4e6734b164926da79649f03c875f9081bb2-merged.mount: Deactivated successfully.
Nov 25 18:33:16 np0005535838 podman[95878]: 2025-11-25 23:33:16.612692418 +0000 UTC m=+1.636275454 container remove 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:33:16 np0005535838 systemd[1]: libpod-conmon-2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93.scope: Deactivated successfully.
Nov 25 18:33:16 np0005535838 python3[96285]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:17 np0005535838 podman[96291]: 2025-11-25 23:33:17.045454702 +0000 UTC m=+0.043400964 container create 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:17 np0005535838 systemd[1]: Started libpod-conmon-5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a.scope.
Nov 25 18:33:17 np0005535838 podman[96291]: 2025-11-25 23:33:17.026673722 +0000 UTC m=+0.024619964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:17 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:17 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08664d14b6666a80ea3693e346ecea1108531eba1e998d326b7b554dbb29dfd0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:17 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08664d14b6666a80ea3693e346ecea1108531eba1e998d326b7b554dbb29dfd0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:17 np0005535838 podman[96291]: 2025-11-25 23:33:17.147745031 +0000 UTC m=+0.145691333 container init 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:33:17 np0005535838 podman[96291]: 2025-11-25 23:33:17.156206292 +0000 UTC m=+0.154152564 container start 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:17 np0005535838 podman[96291]: 2025-11-25 23:33:17.160886815 +0000 UTC m=+0.158833057 container attach 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:17 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 23bf84ef-361a-4ee5-bdda-3f9ad241eab9 does not exist
Nov 25 18:33:17 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev f814aae9-65f2-488e-ad35-b3ab939564d6 does not exist
Nov 25 18:33:17 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev c0e7b792-149e-453c-af2a-a48181c78716 does not exist
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3213799370' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 25 18:33:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1809223002' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 18:33:17 np0005535838 podman[96484]: 2025-11-25 23:33:17.894604014 +0000 UTC m=+0.062549344 container create 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:33:17 np0005535838 systemd[1]: Started libpod-conmon-1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a.scope.
Nov 25 18:33:17 np0005535838 podman[96484]: 2025-11-25 23:33:17.868587744 +0000 UTC m=+0.036533124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:17 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:17 np0005535838 podman[96484]: 2025-11-25 23:33:17.993828363 +0000 UTC m=+0.161773743 container init 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:33:17 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:18 np0005535838 podman[96484]: 2025-11-25 23:33:18.004750188 +0000 UTC m=+0.172695508 container start 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 18:33:18 np0005535838 charming_wing[96500]: 167 167
Nov 25 18:33:18 np0005535838 systemd[1]: libpod-1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a.scope: Deactivated successfully.
Nov 25 18:33:18 np0005535838 podman[96484]: 2025-11-25 23:33:18.0102103 +0000 UTC m=+0.178155630 container attach 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 18:33:18 np0005535838 podman[96484]: 2025-11-25 23:33:18.011493614 +0000 UTC m=+0.179439004 container died 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:18 np0005535838 systemd[1]: var-lib-containers-storage-overlay-5f0cf45ef9b48cce7bfb03c8362aac1b493ff4de6403b3ae60f8652e186c83ae-merged.mount: Deactivated successfully.
Nov 25 18:33:18 np0005535838 podman[96484]: 2025-11-25 23:33:18.058004667 +0000 UTC m=+0.225949997 container remove 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:33:18 np0005535838 systemd[1]: libpod-conmon-1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a.scope: Deactivated successfully.
Nov 25 18:33:18 np0005535838 podman[96522]: 2025-11-25 23:33:18.273582894 +0000 UTC m=+0.049458532 container create df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:33:18 np0005535838 systemd[1]: Started libpod-conmon-df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c.scope.
Nov 25 18:33:18 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:18 np0005535838 podman[96522]: 2025-11-25 23:33:18.257403982 +0000 UTC m=+0.033279650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:18 np0005535838 podman[96522]: 2025-11-25 23:33:18.368250775 +0000 UTC m=+0.144126453 container init df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:18 np0005535838 podman[96522]: 2025-11-25 23:33:18.379347184 +0000 UTC m=+0.155222852 container start df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 18:33:18 np0005535838 podman[96522]: 2025-11-25 23:33:18.383509393 +0000 UTC m=+0.159385071 container attach df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 25 18:33:18 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1809223002' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 18:33:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1809223002' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 18:33:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 25 18:33:18 np0005535838 reverent_morse[96308]: enabled application 'rbd' on pool 'backups'
Nov 25 18:33:18 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 25 18:33:18 np0005535838 systemd[1]: libpod-5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a.scope: Deactivated successfully.
Nov 25 18:33:18 np0005535838 podman[96291]: 2025-11-25 23:33:18.579801396 +0000 UTC m=+1.577747628 container died 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 18:33:18 np0005535838 systemd[1]: var-lib-containers-storage-overlay-08664d14b6666a80ea3693e346ecea1108531eba1e998d326b7b554dbb29dfd0-merged.mount: Deactivated successfully.
Nov 25 18:33:18 np0005535838 podman[96291]: 2025-11-25 23:33:18.625399716 +0000 UTC m=+1.623345948 container remove 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 18:33:18 np0005535838 systemd[1]: libpod-conmon-5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a.scope: Deactivated successfully.
Nov 25 18:33:18 np0005535838 python3[96580]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:19 np0005535838 podman[96581]: 2025-11-25 23:33:19.07615729 +0000 UTC m=+0.060123760 container create b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:19 np0005535838 systemd[1]: Started libpod-conmon-b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2.scope.
Nov 25 18:33:19 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:19 np0005535838 podman[96581]: 2025-11-25 23:33:19.049413702 +0000 UTC m=+0.033380162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:19 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651d1514fbbf65ff3d9a38c9e592d82cf338b07fe78f454c96746c850a4d4927/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:19 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651d1514fbbf65ff3d9a38c9e592d82cf338b07fe78f454c96746c850a4d4927/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:19 np0005535838 podman[96581]: 2025-11-25 23:33:19.164330621 +0000 UTC m=+0.148297151 container init b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:33:19 np0005535838 podman[96581]: 2025-11-25 23:33:19.175220685 +0000 UTC m=+0.159187125 container start b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:19 np0005535838 podman[96581]: 2025-11-25 23:33:19.178311296 +0000 UTC m=+0.162277766 container attach b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:19 np0005535838 cranky_kalam[96538]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:33:19 np0005535838 cranky_kalam[96538]: --> relative data size: 1.0
Nov 25 18:33:19 np0005535838 cranky_kalam[96538]: --> All data devices are unavailable
Nov 25 18:33:19 np0005535838 systemd[1]: libpod-df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c.scope: Deactivated successfully.
Nov 25 18:33:19 np0005535838 podman[96522]: 2025-11-25 23:33:19.504399997 +0000 UTC m=+1.280275725 container died df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:19 np0005535838 systemd[1]: libpod-df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c.scope: Consumed 1.039s CPU time.
Nov 25 18:33:19 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe-merged.mount: Deactivated successfully.
Nov 25 18:33:19 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1809223002' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 18:33:19 np0005535838 podman[96522]: 2025-11-25 23:33:19.585081352 +0000 UTC m=+1.360957030 container remove df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:33:19 np0005535838 systemd[1]: libpod-conmon-df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c.scope: Deactivated successfully.
Nov 25 18:33:19 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 25 18:33:19 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3856405420' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 18:33:19 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:20 np0005535838 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:20 np0005535838 podman[96796]: 2025-11-25 23:33:20.294756563 +0000 UTC m=+0.068441587 container create f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:20 np0005535838 systemd[1]: Started libpod-conmon-f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558.scope.
Nov 25 18:33:20 np0005535838 podman[96796]: 2025-11-25 23:33:20.26860923 +0000 UTC m=+0.042294304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:20 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:20 np0005535838 podman[96796]: 2025-11-25 23:33:20.395127163 +0000 UTC m=+0.168812177 container init f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:20 np0005535838 podman[96796]: 2025-11-25 23:33:20.400646327 +0000 UTC m=+0.174331321 container start f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:33:20 np0005535838 podman[96796]: 2025-11-25 23:33:20.403311976 +0000 UTC m=+0.176996980 container attach f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:33:20 np0005535838 exciting_pike[96813]: 167 167
Nov 25 18:33:20 np0005535838 systemd[1]: libpod-f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558.scope: Deactivated successfully.
Nov 25 18:33:20 np0005535838 podman[96796]: 2025-11-25 23:33:20.407005772 +0000 UTC m=+0.180690756 container died f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:33:20 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ff435bbc29d11eef0a91d42f5b04e5a7729e24133b8da56bdd2236dd18a975b8-merged.mount: Deactivated successfully.
Nov 25 18:33:20 np0005535838 podman[96796]: 2025-11-25 23:33:20.444113701 +0000 UTC m=+0.217798685 container remove f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 18:33:20 np0005535838 systemd[1]: libpod-conmon-f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558.scope: Deactivated successfully.
Nov 25 18:33:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 25 18:33:20 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3856405420' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 18:33:20 np0005535838 ceph-mon[75654]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3856405420' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 18:33:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 25 18:33:20 np0005535838 eager_hopper[96604]: enabled application 'rbd' on pool 'images'
Nov 25 18:33:20 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 25 18:33:20 np0005535838 systemd[1]: libpod-b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2.scope: Deactivated successfully.
Nov 25 18:33:20 np0005535838 podman[96581]: 2025-11-25 23:33:20.594249529 +0000 UTC m=+1.578216009 container died b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 18:33:20 np0005535838 systemd[1]: var-lib-containers-storage-overlay-651d1514fbbf65ff3d9a38c9e592d82cf338b07fe78f454c96746c850a4d4927-merged.mount: Deactivated successfully.
Nov 25 18:33:20 np0005535838 podman[96581]: 2025-11-25 23:33:20.644652865 +0000 UTC m=+1.628619335 container remove b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:20 np0005535838 systemd[1]: libpod-conmon-b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2.scope: Deactivated successfully.
Nov 25 18:33:20 np0005535838 podman[96837]: 2025-11-25 23:33:20.663753103 +0000 UTC m=+0.065034278 container create 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:33:20 np0005535838 systemd[1]: Started libpod-conmon-86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9.scope.
Nov 25 18:33:20 np0005535838 podman[96837]: 2025-11-25 23:33:20.63989481 +0000 UTC m=+0.041176055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:20 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:20 np0005535838 podman[96837]: 2025-11-25 23:33:20.779918395 +0000 UTC m=+0.181199560 container init 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:33:20 np0005535838 podman[96837]: 2025-11-25 23:33:20.797604636 +0000 UTC m=+0.198885801 container start 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:33:20 np0005535838 podman[96837]: 2025-11-25 23:33:20.800663246 +0000 UTC m=+0.201944401 container attach 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:21 np0005535838 python3[96895]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:21 np0005535838 podman[96896]: 2025-11-25 23:33:21.085113039 +0000 UTC m=+0.051765391 container create 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 25 18:33:21 np0005535838 systemd[1]: Started libpod-conmon-07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063.scope.
Nov 25 18:33:21 np0005535838 podman[96896]: 2025-11-25 23:33:21.068737032 +0000 UTC m=+0.035389404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:21 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:21 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740a3ac6852b92104c2bd77aa8d6a6dc09f18e862347efccb3c5543ccd4cf719/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:21 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740a3ac6852b92104c2bd77aa8d6a6dc09f18e862347efccb3c5543ccd4cf719/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:21 np0005535838 podman[96896]: 2025-11-25 23:33:21.192918413 +0000 UTC m=+0.159570785 container init 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 18:33:21 np0005535838 podman[96896]: 2025-11-25 23:33:21.201966659 +0000 UTC m=+0.168619011 container start 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 18:33:21 np0005535838 podman[96896]: 2025-11-25 23:33:21.204923166 +0000 UTC m=+0.171575528 container attach 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:21 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/3856405420' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]: {
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:    "0": [
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:        {
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "devices": [
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "/dev/loop3"
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            ],
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_name": "ceph_lv0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_size": "21470642176",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "name": "ceph_lv0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "tags": {
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.crush_device_class": "",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.encrypted": "0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.osd_id": "0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.type": "block",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.vdo": "0"
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            },
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "type": "block",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "vg_name": "ceph_vg0"
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:        }
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:    ],
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:    "1": [
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:        {
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "devices": [
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "/dev/loop4"
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            ],
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_name": "ceph_lv1",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_size": "21470642176",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "name": "ceph_lv1",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "tags": {
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.crush_device_class": "",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.encrypted": "0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.osd_id": "1",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.type": "block",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.vdo": "0"
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            },
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "type": "block",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "vg_name": "ceph_vg1"
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:        }
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:    ],
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:    "2": [
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:        {
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "devices": [
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "/dev/loop5"
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            ],
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_name": "ceph_lv2",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_size": "21470642176",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "name": "ceph_lv2",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "tags": {
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.crush_device_class": "",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.encrypted": "0",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.osd_id": "2",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.type": "block",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:                "ceph.vdo": "0"
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            },
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "type": "block",
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:            "vg_name": "ceph_vg2"
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:        }
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]:    ]
Nov 25 18:33:21 np0005535838 nostalgic_nightingale[96865]: }
Nov 25 18:33:21 np0005535838 systemd[1]: libpod-86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9.scope: Deactivated successfully.
Nov 25 18:33:21 np0005535838 podman[96837]: 2025-11-25 23:33:21.698461677 +0000 UTC m=+1.099742882 container died 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:33:21 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da-merged.mount: Deactivated successfully.
Nov 25 18:33:21 np0005535838 podman[96837]: 2025-11-25 23:33:21.768901005 +0000 UTC m=+1.170182200 container remove 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 18:33:21 np0005535838 systemd[1]: libpod-conmon-86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9.scope: Deactivated successfully.
Nov 25 18:33:21 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 25 18:33:22 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/403499466' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 18:33:22 np0005535838 podman[97090]: 2025-11-25 23:33:22.582039247 +0000 UTC m=+0.067672308 container create e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 18:33:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 25 18:33:22 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/403499466' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 18:33:22 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/403499466' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 18:33:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 25 18:33:22 np0005535838 gracious_antonelli[96911]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 25 18:33:22 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 25 18:33:22 np0005535838 systemd[1]: Started libpod-conmon-e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75.scope.
Nov 25 18:33:22 np0005535838 systemd[1]: libpod-07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063.scope: Deactivated successfully.
Nov 25 18:33:22 np0005535838 podman[96896]: 2025-11-25 23:33:22.636693873 +0000 UTC m=+1.603346255 container died 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:22 np0005535838 podman[97090]: 2025-11-25 23:33:22.555362401 +0000 UTC m=+0.040995522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:22 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:22 np0005535838 systemd[1]: var-lib-containers-storage-overlay-740a3ac6852b92104c2bd77aa8d6a6dc09f18e862347efccb3c5543ccd4cf719-merged.mount: Deactivated successfully.
Nov 25 18:33:22 np0005535838 podman[97090]: 2025-11-25 23:33:22.680121586 +0000 UTC m=+0.165754657 container init e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 18:33:22 np0005535838 podman[96896]: 2025-11-25 23:33:22.687836437 +0000 UTC m=+1.654488809 container remove 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 18:33:22 np0005535838 podman[97090]: 2025-11-25 23:33:22.693232448 +0000 UTC m=+0.178865489 container start e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:33:22 np0005535838 funny_galileo[97108]: 167 167
Nov 25 18:33:22 np0005535838 podman[97090]: 2025-11-25 23:33:22.69824711 +0000 UTC m=+0.183880181 container attach e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:22 np0005535838 systemd[1]: libpod-e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75.scope: Deactivated successfully.
Nov 25 18:33:22 np0005535838 podman[97090]: 2025-11-25 23:33:22.700310163 +0000 UTC m=+0.185943224 container died e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:33:22 np0005535838 systemd[1]: libpod-conmon-07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063.scope: Deactivated successfully.
Nov 25 18:33:22 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2e5ea64f790e378b5c844542ec1861dfee15f9b4cf5b653be3025f0181fe6254-merged.mount: Deactivated successfully.
Nov 25 18:33:22 np0005535838 podman[97090]: 2025-11-25 23:33:22.73967079 +0000 UTC m=+0.225303831 container remove e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:22 np0005535838 systemd[1]: libpod-conmon-e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75.scope: Deactivated successfully.
Nov 25 18:33:22 np0005535838 podman[97164]: 2025-11-25 23:33:22.93433055 +0000 UTC m=+0.068928609 container create 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:22 np0005535838 systemd[1]: Started libpod-conmon-2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645.scope.
Nov 25 18:33:23 np0005535838 podman[97164]: 2025-11-25 23:33:22.907306555 +0000 UTC m=+0.041904674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:23 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:23 np0005535838 podman[97164]: 2025-11-25 23:33:23.034611018 +0000 UTC m=+0.169209127 container init 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:23 np0005535838 python3[97177]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:23 np0005535838 podman[97164]: 2025-11-25 23:33:23.051583711 +0000 UTC m=+0.186181780 container start 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:23 np0005535838 podman[97164]: 2025-11-25 23:33:23.055673927 +0000 UTC m=+0.190271986 container attach 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:33:23 np0005535838 podman[97190]: 2025-11-25 23:33:23.120035667 +0000 UTC m=+0.057392959 container create d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:23 np0005535838 systemd[1]: Started libpod-conmon-d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5.scope.
Nov 25 18:33:23 np0005535838 podman[97190]: 2025-11-25 23:33:23.091523562 +0000 UTC m=+0.028880904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:23 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14531388e0cc002b2340a6c35766c6439123a636233799a961217a7c62d6c323/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:23 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14531388e0cc002b2340a6c35766c6439123a636233799a961217a7c62d6c323/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:23 np0005535838 podman[97190]: 2025-11-25 23:33:23.222013909 +0000 UTC m=+0.159371191 container init d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:33:23 np0005535838 podman[97190]: 2025-11-25 23:33:23.227185783 +0000 UTC m=+0.164543045 container start d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:33:23 np0005535838 podman[97190]: 2025-11-25 23:33:23.230724096 +0000 UTC m=+0.168081438 container attach d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:33:23 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/403499466' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 18:33:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 25 18:33:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2286248551' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 18:33:23 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:24 np0005535838 cool_joliot[97186]: {
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "osd_id": 2,
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "type": "bluestore"
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:    },
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "osd_id": 1,
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "type": "bluestore"
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:    },
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "osd_id": 0,
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:        "type": "bluestore"
Nov 25 18:33:24 np0005535838 cool_joliot[97186]:    }
Nov 25 18:33:24 np0005535838 cool_joliot[97186]: }
Nov 25 18:33:24 np0005535838 systemd[1]: libpod-2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645.scope: Deactivated successfully.
Nov 25 18:33:24 np0005535838 systemd[1]: libpod-2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645.scope: Consumed 1.150s CPU time.
Nov 25 18:33:24 np0005535838 podman[97259]: 2025-11-25 23:33:24.254467403 +0000 UTC m=+0.033045773 container died 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:24 np0005535838 systemd[1]: var-lib-containers-storage-overlay-979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af-merged.mount: Deactivated successfully.
Nov 25 18:33:24 np0005535838 podman[97259]: 2025-11-25 23:33:24.334840761 +0000 UTC m=+0.113419101 container remove 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:24 np0005535838 systemd[1]: libpod-conmon-2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645.scope: Deactivated successfully.
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/2286248551' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2286248551' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 25 18:33:24 np0005535838 funny_zhukovsky[97207]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 25 18:33:24 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 25 18:33:24 np0005535838 systemd[1]: libpod-d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5.scope: Deactivated successfully.
Nov 25 18:33:24 np0005535838 podman[97190]: 2025-11-25 23:33:24.660769007 +0000 UTC m=+1.598126299 container died d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:33:24 np0005535838 systemd[1]: var-lib-containers-storage-overlay-14531388e0cc002b2340a6c35766c6439123a636233799a961217a7c62d6c323-merged.mount: Deactivated successfully.
Nov 25 18:33:24 np0005535838 podman[97190]: 2025-11-25 23:33:24.712582989 +0000 UTC m=+1.649940281 container remove d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:33:24 np0005535838 systemd[1]: libpod-conmon-d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5.scope: Deactivated successfully.
Nov 25 18:33:25 np0005535838 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:25 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/2286248551' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 18:33:25 np0005535838 ceph-mon[75654]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 18:33:25 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:33:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:33:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:33:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:33:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:33:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:33:26 np0005535838 python3[97414]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:33:26 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 18:33:26 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 18:33:26 np0005535838 python3[97485]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113606.1408372-36784-76276289501161/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:33:27 np0005535838 python3[97535]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:27 np0005535838 podman[97536]: 2025-11-25 23:33:27.504090662 +0000 UTC m=+0.048584219 container create 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:33:27 np0005535838 systemd[1]: Started libpod-conmon-6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1.scope.
Nov 25 18:33:27 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:27 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bba69982c9e489d6b7336acb9b659f5e40e1cf4ba17b46578d045de6f960062/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:27 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bba69982c9e489d6b7336acb9b659f5e40e1cf4ba17b46578d045de6f960062/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:27 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bba69982c9e489d6b7336acb9b659f5e40e1cf4ba17b46578d045de6f960062/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:27 np0005535838 podman[97536]: 2025-11-25 23:33:27.481029981 +0000 UTC m=+0.025523598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:27 np0005535838 podman[97536]: 2025-11-25 23:33:27.587963371 +0000 UTC m=+0.132456978 container init 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:27 np0005535838 podman[97536]: 2025-11-25 23:33:27.601827743 +0000 UTC m=+0.146321300 container start 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:33:27 np0005535838 podman[97536]: 2025-11-25 23:33:27.606492664 +0000 UTC m=+0.150986231 container attach 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:27 np0005535838 ceph-mon[75654]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 18:33:27 np0005535838 ceph-mon[75654]: Cluster is now healthy
Nov 25 18:33:27 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:28 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:33:28 np0005535838 ceph-mgr[75954]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 18:33:28 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0[75650]: 2025-11-25T23:33:28.178+0000 7efe59ee9640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e2 new map
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T23:33:28.179667+0000#012modified#0112025-11-25T23:33:28.179713+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 25 18:33:28 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 25 18:33:28 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:28 np0005535838 ceph-mgr[75954]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 18:33:28 np0005535838 systemd[1]: libpod-6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1.scope: Deactivated successfully.
Nov 25 18:33:28 np0005535838 podman[97536]: 2025-11-25 23:33:28.221003382 +0000 UTC m=+0.765496919 container died 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:33:28 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8bba69982c9e489d6b7336acb9b659f5e40e1cf4ba17b46578d045de6f960062-merged.mount: Deactivated successfully.
Nov 25 18:33:28 np0005535838 podman[97536]: 2025-11-25 23:33:28.266373235 +0000 UTC m=+0.810866772 container remove 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:33:28 np0005535838 systemd[1]: libpod-conmon-6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1.scope: Deactivated successfully.
Nov 25 18:33:28 np0005535838 python3[97692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: Saving service mds.cephfs spec with placement compute-0
Nov 25 18:33:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:28 np0005535838 podman[97713]: 2025-11-25 23:33:28.715702422 +0000 UTC m=+0.056372522 container create 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 18:33:28 np0005535838 systemd[1]: Started libpod-conmon-403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27.scope.
Nov 25 18:33:28 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:28 np0005535838 podman[97713]: 2025-11-25 23:33:28.697436906 +0000 UTC m=+0.038107026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:28 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46cc0d4b09f50605bc909e3178ba8b0718e0e42999faceb683eee5fd0fffe13/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:28 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46cc0d4b09f50605bc909e3178ba8b0718e0e42999faceb683eee5fd0fffe13/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:28 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46cc0d4b09f50605bc909e3178ba8b0718e0e42999faceb683eee5fd0fffe13/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:28 np0005535838 podman[97713]: 2025-11-25 23:33:28.808906285 +0000 UTC m=+0.149576415 container init 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:33:28 np0005535838 podman[97713]: 2025-11-25 23:33:28.816892974 +0000 UTC m=+0.157563104 container start 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:28 np0005535838 podman[97713]: 2025-11-25 23:33:28.821194566 +0000 UTC m=+0.161864666 container attach 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:29 np0005535838 podman[97805]: 2025-11-25 23:33:29.241855544 +0000 UTC m=+0.091529839 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 18:33:29 np0005535838 podman[97805]: 2025-11-25 23:33:29.355546051 +0000 UTC m=+0.205220276 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:29 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14242 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:33:29 np0005535838 ceph-mgr[75954]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 25 18:33:29 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:29 np0005535838 upbeat_faraday[97735]: Scheduled mds.cephfs update...
Nov 25 18:33:29 np0005535838 systemd[1]: libpod-403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27.scope: Deactivated successfully.
Nov 25 18:33:29 np0005535838 podman[97713]: 2025-11-25 23:33:29.401453969 +0000 UTC m=+0.742124089 container died 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:29 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a46cc0d4b09f50605bc909e3178ba8b0718e0e42999faceb683eee5fd0fffe13-merged.mount: Deactivated successfully.
Nov 25 18:33:29 np0005535838 podman[97713]: 2025-11-25 23:33:29.459589557 +0000 UTC m=+0.800259647 container remove 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:29 np0005535838 systemd[1]: libpod-conmon-403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27.scope: Deactivated successfully.
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:29 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev eab50d6d-27b9-4921-b139-e4358bee461f does not exist
Nov 25 18:33:29 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 96ddb597-69f5-4b9f-b6b9-3935e370cd77 does not exist
Nov 25 18:33:29 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev f0dd84be-34cc-4bc4-b40c-75fbdcd85099 does not exist
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:29 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:30 np0005535838 python3[98081]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 18:33:30 np0005535838 ceph-mon[75654]: Saving service mds.cephfs spec with placement compute-0
Nov 25 18:33:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:30 np0005535838 podman[98247]: 2025-11-25 23:33:30.699608978 +0000 UTC m=+0.038348131 container create 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:33:30 np0005535838 systemd[1]: Started libpod-conmon-99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca.scope.
Nov 25 18:33:30 np0005535838 python3[98221]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113609.944351-36815-146667851931385/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=7118a3e4848d5b96f84dfc7266d24215d2762b5c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:33:30 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:30 np0005535838 podman[98247]: 2025-11-25 23:33:30.764876512 +0000 UTC m=+0.103615705 container init 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:33:30 np0005535838 podman[98247]: 2025-11-25 23:33:30.772150512 +0000 UTC m=+0.110889655 container start 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:30 np0005535838 podman[98247]: 2025-11-25 23:33:30.775732036 +0000 UTC m=+0.114471209 container attach 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:30 np0005535838 distracted_satoshi[98263]: 167 167
Nov 25 18:33:30 np0005535838 systemd[1]: libpod-99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca.scope: Deactivated successfully.
Nov 25 18:33:30 np0005535838 podman[98247]: 2025-11-25 23:33:30.681187838 +0000 UTC m=+0.019927021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:30 np0005535838 podman[98247]: 2025-11-25 23:33:30.777379818 +0000 UTC m=+0.116118971 container died 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 18:33:30 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f201bbbf946f56abbf7ed016d66c91b97da9dee700d4ffaa58d02e911c408bae-merged.mount: Deactivated successfully.
Nov 25 18:33:30 np0005535838 podman[98247]: 2025-11-25 23:33:30.822035154 +0000 UTC m=+0.160774337 container remove 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:33:30 np0005535838 systemd[1]: libpod-conmon-99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca.scope: Deactivated successfully.
Nov 25 18:33:30 np0005535838 podman[98314]: 2025-11-25 23:33:30.999103355 +0000 UTC m=+0.044336758 container create b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 18:33:31 np0005535838 systemd[1]: Started libpod-conmon-b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c.scope.
Nov 25 18:33:31 np0005535838 podman[98314]: 2025-11-25 23:33:30.983748244 +0000 UTC m=+0.028981657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:31 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:31 np0005535838 podman[98314]: 2025-11-25 23:33:31.106859227 +0000 UTC m=+0.152092700 container init b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:31 np0005535838 podman[98314]: 2025-11-25 23:33:31.124672122 +0000 UTC m=+0.169905545 container start b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:33:31 np0005535838 podman[98314]: 2025-11-25 23:33:31.128854761 +0000 UTC m=+0.174088194 container attach b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:31 np0005535838 python3[98361]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:31 np0005535838 podman[98362]: 2025-11-25 23:33:31.414457195 +0000 UTC m=+0.057576764 container create 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:33:31 np0005535838 systemd[1]: Started libpod-conmon-56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134.scope.
Nov 25 18:33:31 np0005535838 podman[98362]: 2025-11-25 23:33:31.384938595 +0000 UTC m=+0.028058214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:31 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387e6c0558e425423a82f67bfc159d001809ff45419813f3ce54746f0e5bf3ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:31 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387e6c0558e425423a82f67bfc159d001809ff45419813f3ce54746f0e5bf3ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:31 np0005535838 podman[98362]: 2025-11-25 23:33:31.514610318 +0000 UTC m=+0.157729917 container init 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:31 np0005535838 podman[98362]: 2025-11-25 23:33:31.523511691 +0000 UTC m=+0.166631260 container start 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:33:31 np0005535838 podman[98362]: 2025-11-25 23:33:31.527959237 +0000 UTC m=+0.171078856 container attach 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:31 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 25 18:33:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/92955043' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 18:33:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/92955043' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 18:33:32 np0005535838 systemd[1]: libpod-56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134.scope: Deactivated successfully.
Nov 25 18:33:32 np0005535838 podman[98422]: 2025-11-25 23:33:32.192950552 +0000 UTC m=+0.029742457 container died 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:32 np0005535838 systemd[1]: var-lib-containers-storage-overlay-387e6c0558e425423a82f67bfc159d001809ff45419813f3ce54746f0e5bf3ca-merged.mount: Deactivated successfully.
Nov 25 18:33:32 np0005535838 festive_tesla[98331]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:33:32 np0005535838 festive_tesla[98331]: --> relative data size: 1.0
Nov 25 18:33:32 np0005535838 festive_tesla[98331]: --> All data devices are unavailable
Nov 25 18:33:32 np0005535838 systemd[1]: libpod-b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c.scope: Deactivated successfully.
Nov 25 18:33:32 np0005535838 systemd[1]: libpod-b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c.scope: Consumed 1.063s CPU time.
Nov 25 18:33:32 np0005535838 podman[98422]: 2025-11-25 23:33:32.264683324 +0000 UTC m=+0.101475159 container remove 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:32 np0005535838 systemd[1]: libpod-conmon-56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134.scope: Deactivated successfully.
Nov 25 18:33:32 np0005535838 podman[98441]: 2025-11-25 23:33:32.311540167 +0000 UTC m=+0.040559849 container died b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:32 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e-merged.mount: Deactivated successfully.
Nov 25 18:33:32 np0005535838 podman[98441]: 2025-11-25 23:33:32.364237552 +0000 UTC m=+0.093257164 container remove b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:32 np0005535838 systemd[1]: libpod-conmon-b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c.scope: Deactivated successfully.
Nov 25 18:33:32 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/92955043' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 18:33:32 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/92955043' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 18:33:33 np0005535838 python3[98606]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:33 np0005535838 podman[98621]: 2025-11-25 23:33:33.17302415 +0000 UTC m=+0.065136041 container create a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:33 np0005535838 systemd[1]: Started libpod-conmon-a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017.scope.
Nov 25 18:33:33 np0005535838 podman[98621]: 2025-11-25 23:33:33.146338564 +0000 UTC m=+0.038450495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:33 np0005535838 podman[98636]: 2025-11-25 23:33:33.239888195 +0000 UTC m=+0.074164676 container create 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:33:33 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:33 np0005535838 systemd[1]: Started libpod-conmon-362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6.scope.
Nov 25 18:33:33 np0005535838 podman[98621]: 2025-11-25 23:33:33.274349974 +0000 UTC m=+0.166461905 container init a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:33 np0005535838 podman[98621]: 2025-11-25 23:33:33.286099121 +0000 UTC m=+0.178211002 container start a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:33:33 np0005535838 podman[98621]: 2025-11-25 23:33:33.28992809 +0000 UTC m=+0.182040021 container attach a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:33:33 np0005535838 romantic_goodall[98650]: 167 167
Nov 25 18:33:33 np0005535838 podman[98621]: 2025-11-25 23:33:33.294401238 +0000 UTC m=+0.186513129 container died a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:33 np0005535838 podman[98636]: 2025-11-25 23:33:33.21097358 +0000 UTC m=+0.045250111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:33 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:33 np0005535838 systemd[1]: libpod-a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017.scope: Deactivated successfully.
Nov 25 18:33:33 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7d4e1d1b549ee04c4359a2b3a3507bc50221780b185c3d88d58962e7fd4e5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:33 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7d4e1d1b549ee04c4359a2b3a3507bc50221780b185c3d88d58962e7fd4e5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:33 np0005535838 podman[98636]: 2025-11-25 23:33:33.334231257 +0000 UTC m=+0.168507818 container init 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:33:33 np0005535838 podman[98636]: 2025-11-25 23:33:33.344920496 +0000 UTC m=+0.179196947 container start 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:33 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ad4ce7145cf7f8415eb28ab8d25e616d85828e021e1a10c89c25afbe0373ccb6-merged.mount: Deactivated successfully.
Nov 25 18:33:33 np0005535838 podman[98636]: 2025-11-25 23:33:33.348370376 +0000 UTC m=+0.182646867 container attach 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:33:33 np0005535838 podman[98621]: 2025-11-25 23:33:33.367603058 +0000 UTC m=+0.259714919 container remove a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:33 np0005535838 systemd[1]: libpod-conmon-a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017.scope: Deactivated successfully.
Nov 25 18:33:33 np0005535838 podman[98680]: 2025-11-25 23:33:33.568272235 +0000 UTC m=+0.057139403 container create 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 18:33:33 np0005535838 systemd[1]: Started libpod-conmon-6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81.scope.
Nov 25 18:33:33 np0005535838 podman[98680]: 2025-11-25 23:33:33.537929103 +0000 UTC m=+0.026796311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:33 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:33 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:33 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:33 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:33 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:33 np0005535838 podman[98680]: 2025-11-25 23:33:33.655844471 +0000 UTC m=+0.144711679 container init 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 18:33:33 np0005535838 podman[98680]: 2025-11-25 23:33:33.668729467 +0000 UTC m=+0.157596625 container start 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:33:33 np0005535838 podman[98680]: 2025-11-25 23:33:33.672252228 +0000 UTC m=+0.161119406 container attach 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:33 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 18:33:33 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389221647' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 18:33:33 np0005535838 ecstatic_diffie[98655]: 
Nov 25 18:33:33 np0005535838 ecstatic_diffie[98655]: {"fsid":"101922db-575f-58e2-980f-928050464f69","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":143,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1764113585,"num_in_osds":3,"osd_in_since":1764113559,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83849216,"bytes_avail":64328077312,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T23:32:57.989925+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 25 18:33:33 np0005535838 systemd[1]: libpod-362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6.scope: Deactivated successfully.
Nov 25 18:33:33 np0005535838 podman[98636]: 2025-11-25 23:33:33.955983633 +0000 UTC m=+0.790260084 container died 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:33 np0005535838 systemd[1]: var-lib-containers-storage-overlay-5a7d4e1d1b549ee04c4359a2b3a3507bc50221780b185c3d88d58962e7fd4e5f-merged.mount: Deactivated successfully.
Nov 25 18:33:33 np0005535838 podman[98636]: 2025-11-25 23:33:33.996518571 +0000 UTC m=+0.830795022 container remove 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 18:33:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:34 np0005535838 systemd[1]: libpod-conmon-362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6.scope: Deactivated successfully.
Nov 25 18:33:34 np0005535838 python3[98760]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:34 np0005535838 podman[98763]: 2025-11-25 23:33:34.41297657 +0000 UTC m=+0.052716647 container create 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 18:33:34 np0005535838 systemd[1]: Started libpod-conmon-4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544.scope.
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]: {
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:    "0": [
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:        {
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "devices": [
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "/dev/loop3"
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            ],
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_name": "ceph_lv0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_size": "21470642176",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "name": "ceph_lv0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "tags": {
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.crush_device_class": "",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.encrypted": "0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.osd_id": "0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.type": "block",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.vdo": "0"
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            },
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "type": "block",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "vg_name": "ceph_vg0"
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:        }
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:    ],
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:    "1": [
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:        {
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "devices": [
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "/dev/loop4"
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            ],
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_name": "ceph_lv1",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_size": "21470642176",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "name": "ceph_lv1",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "tags": {
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.crush_device_class": "",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.encrypted": "0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.osd_id": "1",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.type": "block",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.vdo": "0"
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            },
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "type": "block",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "vg_name": "ceph_vg1"
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:        }
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:    ],
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:    "2": [
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:        {
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "devices": [
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "/dev/loop5"
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            ],
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_name": "ceph_lv2",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_size": "21470642176",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "name": "ceph_lv2",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "tags": {
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.crush_device_class": "",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.encrypted": "0",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.osd_id": "2",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.type": "block",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:                "ceph.vdo": "0"
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            },
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "type": "block",
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:            "vg_name": "ceph_vg2"
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:        }
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]:    ]
Nov 25 18:33:34 np0005535838 vigilant_dijkstra[98696]: }
Nov 25 18:33:34 np0005535838 podman[98763]: 2025-11-25 23:33:34.384868406 +0000 UTC m=+0.024608483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:34 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7021109aa2c27b0b6334d360ba48fe38b57361c8727f182ec604e1573f293a2a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7021109aa2c27b0b6334d360ba48fe38b57361c8727f182ec604e1573f293a2a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:34 np0005535838 systemd[1]: libpod-6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81.scope: Deactivated successfully.
Nov 25 18:33:34 np0005535838 podman[98680]: 2025-11-25 23:33:34.514354996 +0000 UTC m=+1.003222184 container died 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:33:34 np0005535838 podman[98763]: 2025-11-25 23:33:34.528924486 +0000 UTC m=+0.168664563 container init 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 18:33:34 np0005535838 podman[98763]: 2025-11-25 23:33:34.539976744 +0000 UTC m=+0.179716801 container start 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 18:33:34 np0005535838 podman[98763]: 2025-11-25 23:33:34.544671006 +0000 UTC m=+0.184411133 container attach 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:34 np0005535838 systemd[1]: var-lib-containers-storage-overlay-28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f-merged.mount: Deactivated successfully.
Nov 25 18:33:34 np0005535838 podman[98680]: 2025-11-25 23:33:34.58154865 +0000 UTC m=+1.070415848 container remove 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:34 np0005535838 systemd[1]: libpod-conmon-6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81.scope: Deactivated successfully.
Nov 25 18:33:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 18:33:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636941601' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 18:33:35 np0005535838 zealous_moore[98780]: 
Nov 25 18:33:35 np0005535838 zealous_moore[98780]: {"epoch":1,"fsid":"101922db-575f-58e2-980f-928050464f69","modified":"2025-11-25T23:31:04.907397Z","created":"2025-11-25T23:31:04.907397Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 25 18:33:35 np0005535838 zealous_moore[98780]: dumped monmap epoch 1
Nov 25 18:33:35 np0005535838 systemd[1]: libpod-4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544.scope: Deactivated successfully.
Nov 25 18:33:35 np0005535838 podman[98763]: 2025-11-25 23:33:35.117640071 +0000 UTC m=+0.757380158 container died 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:35 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7021109aa2c27b0b6334d360ba48fe38b57361c8727f182ec604e1573f293a2a-merged.mount: Deactivated successfully.
Nov 25 18:33:35 np0005535838 podman[98763]: 2025-11-25 23:33:35.176294841 +0000 UTC m=+0.816034948 container remove 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:35 np0005535838 systemd[1]: libpod-conmon-4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544.scope: Deactivated successfully.
Nov 25 18:33:35 np0005535838 podman[98968]: 2025-11-25 23:33:35.415398961 +0000 UTC m=+0.063589680 container create 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:35 np0005535838 systemd[1]: Started libpod-conmon-029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83.scope.
Nov 25 18:33:35 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:35 np0005535838 podman[98968]: 2025-11-25 23:33:35.400243796 +0000 UTC m=+0.048434535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:35 np0005535838 podman[98968]: 2025-11-25 23:33:35.494245139 +0000 UTC m=+0.142435878 container init 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:33:35 np0005535838 podman[98968]: 2025-11-25 23:33:35.501725964 +0000 UTC m=+0.149916683 container start 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:33:35 np0005535838 podman[98968]: 2025-11-25 23:33:35.505541664 +0000 UTC m=+0.153732483 container attach 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:35 np0005535838 sweet_pare[98985]: 167 167
Nov 25 18:33:35 np0005535838 systemd[1]: libpod-029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83.scope: Deactivated successfully.
Nov 25 18:33:35 np0005535838 podman[98968]: 2025-11-25 23:33:35.507139176 +0000 UTC m=+0.155329895 container died 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:35 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c838371db4d7157eeec4ddd6e9863561b0e3ecd8e41b54cd0a6e66ad4050f82c-merged.mount: Deactivated successfully.
Nov 25 18:33:35 np0005535838 podman[98968]: 2025-11-25 23:33:35.556156225 +0000 UTC m=+0.204346944 container remove 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:35 np0005535838 systemd[1]: libpod-conmon-029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83.scope: Deactivated successfully.
Nov 25 18:33:35 np0005535838 podman[99035]: 2025-11-25 23:33:35.765046586 +0000 UTC m=+0.043352152 container create c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:35 np0005535838 systemd[1]: Started libpod-conmon-c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735.scope.
Nov 25 18:33:35 np0005535838 podman[99035]: 2025-11-25 23:33:35.744671025 +0000 UTC m=+0.022976581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:35 np0005535838 python3[99029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:35 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:35 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:35 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:35 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:35 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:35 np0005535838 podman[99035]: 2025-11-25 23:33:35.891607319 +0000 UTC m=+0.169912915 container init c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:35 np0005535838 podman[99035]: 2025-11-25 23:33:35.906430886 +0000 UTC m=+0.184736462 container start c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:33:35 np0005535838 podman[99035]: 2025-11-25 23:33:35.911065307 +0000 UTC m=+0.189370943 container attach c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:33:35 np0005535838 podman[99054]: 2025-11-25 23:33:35.925791222 +0000 UTC m=+0.061830765 container create bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:35 np0005535838 systemd[1]: Started libpod-conmon-bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e.scope.
Nov 25 18:33:35 np0005535838 podman[99054]: 2025-11-25 23:33:35.902711949 +0000 UTC m=+0.038751512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:36 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:36 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edcfe5b168b08b3daa6d5dc5e6de882ab41de9dce439fdb8f70a14d82a1f599/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:36 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edcfe5b168b08b3daa6d5dc5e6de882ab41de9dce439fdb8f70a14d82a1f599/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:36 np0005535838 podman[99054]: 2025-11-25 23:33:36.025115334 +0000 UTC m=+0.161154917 container init bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:33:36 np0005535838 podman[99054]: 2025-11-25 23:33:36.034906219 +0000 UTC m=+0.170945772 container start bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:36 np0005535838 podman[99054]: 2025-11-25 23:33:36.038443771 +0000 UTC m=+0.174483424 container attach bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 25 18:33:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1958623184' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 18:33:36 np0005535838 gallant_mcnulty[99071]: [client.openstack]
Nov 25 18:33:36 np0005535838 gallant_mcnulty[99071]: #011key = AQAfPCZpAAAAABAAikUZSrYMJ3qAPbvPGOplUw==
Nov 25 18:33:36 np0005535838 gallant_mcnulty[99071]: #011caps mgr = "allow *"
Nov 25 18:33:36 np0005535838 gallant_mcnulty[99071]: #011caps mon = "profile rbd"
Nov 25 18:33:36 np0005535838 gallant_mcnulty[99071]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 25 18:33:36 np0005535838 systemd[1]: libpod-bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e.scope: Deactivated successfully.
Nov 25 18:33:36 np0005535838 podman[99054]: 2025-11-25 23:33:36.629795685 +0000 UTC m=+0.765835238 container died bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:36 np0005535838 systemd[1]: var-lib-containers-storage-overlay-4edcfe5b168b08b3daa6d5dc5e6de882ab41de9dce439fdb8f70a14d82a1f599-merged.mount: Deactivated successfully.
Nov 25 18:33:36 np0005535838 podman[99054]: 2025-11-25 23:33:36.68438943 +0000 UTC m=+0.820428983 container remove bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:36 np0005535838 systemd[1]: libpod-conmon-bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e.scope: Deactivated successfully.
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]: {
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "osd_id": 2,
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "type": "bluestore"
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:    },
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "osd_id": 1,
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "type": "bluestore"
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:    },
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "osd_id": 0,
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:        "type": "bluestore"
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]:    }
Nov 25 18:33:36 np0005535838 frosty_merkle[99051]: }
Nov 25 18:33:36 np0005535838 systemd[1]: libpod-c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735.scope: Deactivated successfully.
Nov 25 18:33:36 np0005535838 podman[99136]: 2025-11-25 23:33:36.925132733 +0000 UTC m=+0.041028693 container died c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:33:36 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3-merged.mount: Deactivated successfully.
Nov 25 18:33:36 np0005535838 podman[99136]: 2025-11-25 23:33:36.983524446 +0000 UTC m=+0.099420386 container remove c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:33:36 np0005535838 systemd[1]: libpod-conmon-c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735.scope: Deactivated successfully.
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:37 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev 343aa52f-96fb-47a1-88c6-7d88c4dd8423 (Updating mds.cephfs deployment (+1 -> 1))
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:37 np0005535838 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.bgauhq on compute-0
Nov 25 18:33:37 np0005535838 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.bgauhq on compute-0
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: from='client.? 192.168.122.100:0/1958623184' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 18:33:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 18:33:37 np0005535838 podman[99294]: 2025-11-25 23:33:37.665761571 +0000 UTC m=+0.042399497 container create 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:37 np0005535838 systemd[1]: Started libpod-conmon-235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54.scope.
Nov 25 18:33:37 np0005535838 podman[99294]: 2025-11-25 23:33:37.645565687 +0000 UTC m=+0.022203643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:37 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:37 np0005535838 podman[99294]: 2025-11-25 23:33:37.773571694 +0000 UTC m=+0.150209620 container init 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:33:37 np0005535838 podman[99294]: 2025-11-25 23:33:37.784265704 +0000 UTC m=+0.160903660 container start 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:33:37 np0005535838 podman[99294]: 2025-11-25 23:33:37.788488897 +0000 UTC m=+0.165126843 container attach 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:37 np0005535838 wonderful_bell[99318]: 167 167
Nov 25 18:33:37 np0005535838 systemd[1]: libpod-235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54.scope: Deactivated successfully.
Nov 25 18:33:37 np0005535838 podman[99294]: 2025-11-25 23:33:37.790755563 +0000 UTC m=+0.167393519 container died 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:33:37 np0005535838 systemd[1]: var-lib-containers-storage-overlay-3e415b742c9a48c8f006850b4cc2b077e4ca35e5985266a3d5378ff4abe3838b-merged.mount: Deactivated successfully.
Nov 25 18:33:37 np0005535838 podman[99294]: 2025-11-25 23:33:37.832693247 +0000 UTC m=+0.209331203 container remove 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:33:37 np0005535838 systemd[1]: libpod-conmon-235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54.scope: Deactivated successfully.
Nov 25 18:33:37 np0005535838 systemd[1]: Reloading.
Nov 25 18:33:37 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:33:37 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:33:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:38 np0005535838 systemd[1]: Reloading.
Nov 25 18:33:38 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:33:38 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: Deploying daemon mds.cephfs.compute-0.bgauhq on compute-0
Nov 25 18:33:38 np0005535838 systemd[1]: Starting Ceph mds.cephfs.compute-0.bgauhq for 101922db-575f-58e2-980f-928050464f69...
Nov 25 18:33:38 np0005535838 ansible-async_wrapper.py[99556]: Invoked with j119083935891 30 /home/zuul/.ansible/tmp/ansible-tmp-1764113617.772573-36887-60033739649088/AnsiballZ_command.py _
Nov 25 18:33:38 np0005535838 ansible-async_wrapper.py[99585]: Starting module and watcher
Nov 25 18:33:38 np0005535838 ansible-async_wrapper.py[99585]: Start watching 99586 (30)
Nov 25 18:33:38 np0005535838 ansible-async_wrapper.py[99586]: Start module (99586)
Nov 25 18:33:38 np0005535838 ansible-async_wrapper.py[99556]: Return async_wrapper task started.
Nov 25 18:33:38 np0005535838 python3[99587]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:38 np0005535838 podman[99611]: 2025-11-25 23:33:38.829586542 +0000 UTC m=+0.054093483 container create 6b229c37a0bc838d3ecf26a033cf171f9949458f3108f561377ced7bcf58e37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mds-cephfs-compute-0-bgauhq, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4294b571599acfa933f696cc4337fcd86d2118bd3bdd123445aa1a56bba7651f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4294b571599acfa933f696cc4337fcd86d2118bd3bdd123445aa1a56bba7651f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4294b571599acfa933f696cc4337fcd86d2118bd3bdd123445aa1a56bba7651f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4294b571599acfa933f696cc4337fcd86d2118bd3bdd123445aa1a56bba7651f/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.bgauhq supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:38 np0005535838 podman[99611]: 2025-11-25 23:33:38.801588128 +0000 UTC m=+0.026095159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:38 np0005535838 podman[99611]: 2025-11-25 23:33:38.911222165 +0000 UTC m=+0.135729186 container init 6b229c37a0bc838d3ecf26a033cf171f9949458f3108f561377ced7bcf58e37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mds-cephfs-compute-0-bgauhq, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:33:38 np0005535838 podman[99611]: 2025-11-25 23:33:38.915710834 +0000 UTC m=+0.140217815 container start 6b229c37a0bc838d3ecf26a033cf171f9949458f3108f561377ced7bcf58e37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mds-cephfs-compute-0-bgauhq, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 18:33:38 np0005535838 bash[99611]: 6b229c37a0bc838d3ecf26a033cf171f9949458f3108f561377ced7bcf58e37f
Nov 25 18:33:38 np0005535838 podman[99624]: 2025-11-25 23:33:38.926476508 +0000 UTC m=+0.071896507 container create 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:38 np0005535838 systemd[1]: Started Ceph mds.cephfs.compute-0.bgauhq for 101922db-575f-58e2-980f-928050464f69.
Nov 25 18:33:38 np0005535838 ceph-mds[99641]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 18:33:38 np0005535838 ceph-mds[99641]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 25 18:33:38 np0005535838 ceph-mds[99641]: main not setting numa affinity
Nov 25 18:33:38 np0005535838 ceph-mds[99641]: pidfile_write: ignore empty --pid-file
Nov 25 18:33:38 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mds-cephfs-compute-0-bgauhq[99635]: starting mds.cephfs.compute-0.bgauhq at 
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:38 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Updating MDS map to version 2 from mon.0
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:38 np0005535838 systemd[1]: Started libpod-conmon-68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2.scope.
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:38 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev 343aa52f-96fb-47a1-88c6-7d88c4dd8423 (Updating mds.cephfs deployment (+1 -> 1))
Nov 25 18:33:38 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event 343aa52f-96fb-47a1-88c6-7d88c4dd8423 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 25 18:33:38 np0005535838 podman[99624]: 2025-11-25 23:33:38.89833258 +0000 UTC m=+0.043752669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 18:33:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:39 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:39 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5bccdc04fc614844d60f4975be1ede26c0c2e2aba5932a60a0eb6ea2852520/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:39 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5bccdc04fc614844d60f4975be1ede26c0c2e2aba5932a60a0eb6ea2852520/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:39 np0005535838 podman[99624]: 2025-11-25 23:33:39.035237374 +0000 UTC m=+0.180657433 container init 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:39 np0005535838 podman[99624]: 2025-11-25 23:33:39.047288498 +0000 UTC m=+0.192708537 container start 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:39 np0005535838 podman[99624]: 2025-11-25 23:33:39.053419238 +0000 UTC m=+0.198839317 container attach 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:39 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 18:33:39 np0005535838 vibrant_einstein[99665]: 
Nov 25 18:33:39 np0005535838 vibrant_einstein[99665]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 18:33:39 np0005535838 systemd[1]: libpod-68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2.scope: Deactivated successfully.
Nov 25 18:33:39 np0005535838 podman[99624]: 2025-11-25 23:33:39.614013657 +0000 UTC m=+0.759433676 container died 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:33:39 np0005535838 systemd[1]: var-lib-containers-storage-overlay-6c5bccdc04fc614844d60f4975be1ede26c0c2e2aba5932a60a0eb6ea2852520-merged.mount: Deactivated successfully.
Nov 25 18:33:39 np0005535838 podman[99624]: 2025-11-25 23:33:39.677883398 +0000 UTC m=+0.823303427 container remove 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:33:39 np0005535838 systemd[1]: libpod-conmon-68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2.scope: Deactivated successfully.
Nov 25 18:33:39 np0005535838 ansible-async_wrapper.py[99586]: Module complete (99586)
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e3 new map
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T23:33:28.179667+0000#012modified#0112025-11-25T23:33:28.179713+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.bgauhq{-1:14252} state up:standby seq 1 addr [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] compat {c=[1],r=[1],i=[7ff]}]
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Updating MDS map to version 3 from mon.0
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Monitors have assigned me to become a standby.
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] up:boot
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] as mds.0
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bgauhq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.bgauhq"} v 0) v1
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bgauhq"}]: dispatch
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e3 all = 0
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e4 new map
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T23:33:28.179667+0000#012modified#0112025-11-25T23:33:39.976596+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14252}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.bgauhq{0:14252} state up:creating seq 1 addr [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 25 18:33:39 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bgauhq=up:creating}
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Updating MDS map to version 4 from mon.0
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x1
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x100
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x600
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x601
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x602
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x603
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x604
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x605
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x606
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x607
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x608
Nov 25 18:33:39 np0005535838 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x609
Nov 25 18:33:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:40 np0005535838 ceph-mds[99641]: mds.0.4 creating_done
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bgauhq is now active in filesystem cephfs as rank 0
Nov 25 18:33:40 np0005535838 python3[99939]: ansible-ansible.legacy.async_status Invoked with jid=j119083935891.99556 mode=status _async_dir=/root/.ansible_async
Nov 25 18:33:40 np0005535838 podman[99982]: 2025-11-25 23:33:40.165570687 +0000 UTC m=+0.073299221 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:40 np0005535838 podman[99982]: 2025-11-25 23:33:40.314543734 +0000 UTC m=+0.222272148 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:40 np0005535838 python3[100050]: ansible-ansible.legacy.async_status Invoked with jid=j119083935891.99556 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: daemon mds.cephfs.compute-0.bgauhq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: Cluster is now healthy
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: daemon mds.cephfs.compute-0.bgauhq is now active in filesystem cephfs as rank 0
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e5 new map
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T23:33:28.179667+0000#012modified#0112025-11-25T23:33:40.984330+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14252}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.bgauhq{0:14252} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 25 18:33:40 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Updating MDS map to version 5 from mon.0
Nov 25 18:33:40 np0005535838 ceph-mds[99641]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 25 18:33:40 np0005535838 ceph-mds[99641]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 25 18:33:40 np0005535838 ceph-mds[99641]: mds.0.4 recovery_done -- successful recovery!
Nov 25 18:33:40 np0005535838 ceph-mds[99641]: mds.0.4 active_start
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] up:active
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bgauhq=up:active}
Nov 25 18:33:40 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:41 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev ac0906c8-6b00-499f-ae50-1fb05cac968c does not exist
Nov 25 18:33:41 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 2abc0533-6a99-4a7e-ab93-a86cbaf900c4 does not exist
Nov 25 18:33:41 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 298aa286-44c3-4080-b22a-8342d0fd1a79 does not exist
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:41 np0005535838 ceph-mgr[75954]: [progress INFO root] Writing back 4 completed events
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 18:33:41 np0005535838 python3[100196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:41 np0005535838 podman[100223]: 2025-11-25 23:33:41.170235342 +0000 UTC m=+0.050682670 container create 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:33:41 np0005535838 systemd[1]: Started libpod-conmon-7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d.scope.
Nov 25 18:33:41 np0005535838 podman[100223]: 2025-11-25 23:33:41.153033341 +0000 UTC m=+0.033480659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:41 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:41 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d918b341a5630ffb48c3b70ff4b41be27c4f27786a539ca97a5eb8ced76714d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:41 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d918b341a5630ffb48c3b70ff4b41be27c4f27786a539ca97a5eb8ced76714d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:41 np0005535838 podman[100223]: 2025-11-25 23:33:41.271232118 +0000 UTC m=+0.151679456 container init 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:41 np0005535838 podman[100223]: 2025-11-25 23:33:41.276690081 +0000 UTC m=+0.157137419 container start 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 18:33:41 np0005535838 podman[100223]: 2025-11-25 23:33:41.28236423 +0000 UTC m=+0.162811578 container attach 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:41 np0005535838 podman[100376]: 2025-11-25 23:33:41.723346179 +0000 UTC m=+0.058820428 container create 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:41 np0005535838 systemd[1]: Started libpod-conmon-64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1.scope.
Nov 25 18:33:41 np0005535838 podman[100376]: 2025-11-25 23:33:41.693401387 +0000 UTC m=+0.028875636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:41 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:41 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 18:33:41 np0005535838 intelligent_raman[100271]: 
Nov 25 18:33:41 np0005535838 intelligent_raman[100271]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 18:33:41 np0005535838 podman[100376]: 2025-11-25 23:33:41.812317471 +0000 UTC m=+0.147791720 container init 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:33:41 np0005535838 systemd[1]: libpod-7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d.scope: Deactivated successfully.
Nov 25 18:33:41 np0005535838 podman[100223]: 2025-11-25 23:33:41.816281708 +0000 UTC m=+0.696729036 container died 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:33:41 np0005535838 podman[100376]: 2025-11-25 23:33:41.825258007 +0000 UTC m=+0.160732256 container start 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:41 np0005535838 podman[100376]: 2025-11-25 23:33:41.828830734 +0000 UTC m=+0.164305043 container attach 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:41 np0005535838 busy_matsumoto[100392]: 167 167
Nov 25 18:33:41 np0005535838 systemd[1]: libpod-64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1.scope: Deactivated successfully.
Nov 25 18:33:41 np0005535838 podman[100376]: 2025-11-25 23:33:41.833717403 +0000 UTC m=+0.169191652 container died 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:41 np0005535838 systemd[1]: var-lib-containers-storage-overlay-9d918b341a5630ffb48c3b70ff4b41be27c4f27786a539ca97a5eb8ced76714d-merged.mount: Deactivated successfully.
Nov 25 18:33:41 np0005535838 podman[100223]: 2025-11-25 23:33:41.878898427 +0000 UTC m=+0.759345755 container remove 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:41 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2ee2873269a2001c4e694ec1fc085f31c51b0c9a1ab0c0081d4026a1612edbac-merged.mount: Deactivated successfully.
Nov 25 18:33:41 np0005535838 systemd[1]: libpod-conmon-7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d.scope: Deactivated successfully.
Nov 25 18:33:41 np0005535838 podman[100376]: 2025-11-25 23:33:41.922441811 +0000 UTC m=+0.257916040 container remove 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:41 np0005535838 systemd[1]: libpod-conmon-64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1.scope: Deactivated successfully.
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:41 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:42 np0005535838 podman[100428]: 2025-11-25 23:33:42.148688975 +0000 UTC m=+0.066963205 container create 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:33:42 np0005535838 systemd[1]: Started libpod-conmon-6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9.scope.
Nov 25 18:33:42 np0005535838 podman[100428]: 2025-11-25 23:33:42.12187714 +0000 UTC m=+0.040151420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:42 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:42 np0005535838 podman[100428]: 2025-11-25 23:33:42.283646131 +0000 UTC m=+0.201920411 container init 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:42 np0005535838 podman[100428]: 2025-11-25 23:33:42.295958802 +0000 UTC m=+0.214233042 container start 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:42 np0005535838 podman[100428]: 2025-11-25 23:33:42.300196496 +0000 UTC m=+0.218470746 container attach 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 18:33:42 np0005535838 python3[100475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:42 np0005535838 podman[100476]: 2025-11-25 23:33:42.925097996 +0000 UTC m=+0.063065011 container create b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:42 np0005535838 systemd[1]: Started libpod-conmon-b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c.scope.
Nov 25 18:33:42 np0005535838 podman[100476]: 2025-11-25 23:33:42.900378082 +0000 UTC m=+0.038345137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:43 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:43 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dc89a677651595fc7596504a6a280aa28849e37c903471e1795359f28901ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:43 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dc89a677651595fc7596504a6a280aa28849e37c903471e1795359f28901ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:43 np0005535838 podman[100476]: 2025-11-25 23:33:43.027783363 +0000 UTC m=+0.165750458 container init b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:43 np0005535838 podman[100476]: 2025-11-25 23:33:43.039618273 +0000 UTC m=+0.177585318 container start b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:43 np0005535838 podman[100476]: 2025-11-25 23:33:43.042906163 +0000 UTC m=+0.180873208 container attach b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:33:43 np0005535838 eager_kalam[100445]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:33:43 np0005535838 eager_kalam[100445]: --> relative data size: 1.0
Nov 25 18:33:43 np0005535838 eager_kalam[100445]: --> All data devices are unavailable
Nov 25 18:33:43 np0005535838 systemd[1]: libpod-6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9.scope: Deactivated successfully.
Nov 25 18:33:43 np0005535838 podman[100428]: 2025-11-25 23:33:43.406390689 +0000 UTC m=+1.324664889 container died 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:43 np0005535838 systemd[1]: libpod-6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9.scope: Consumed 1.040s CPU time.
Nov 25 18:33:43 np0005535838 systemd[1]: var-lib-containers-storage-overlay-fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd-merged.mount: Deactivated successfully.
Nov 25 18:33:43 np0005535838 podman[100428]: 2025-11-25 23:33:43.454236688 +0000 UTC m=+1.372510888 container remove 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:33:43 np0005535838 systemd[1]: libpod-conmon-6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9.scope: Deactivated successfully.
Nov 25 18:33:43 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 18:33:43 np0005535838 trusting_lewin[100495]: 
Nov 25 18:33:43 np0005535838 trusting_lewin[100495]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}]
Nov 25 18:33:43 np0005535838 systemd[1]: libpod-b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c.scope: Deactivated successfully.
Nov 25 18:33:43 np0005535838 podman[100476]: 2025-11-25 23:33:43.661495759 +0000 UTC m=+0.799462774 container died b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:43 np0005535838 ansible-async_wrapper.py[99585]: Done in kid B.
Nov 25 18:33:43 np0005535838 systemd[1]: var-lib-containers-storage-overlay-76dc89a677651595fc7596504a6a280aa28849e37c903471e1795359f28901ab-merged.mount: Deactivated successfully.
Nov 25 18:33:43 np0005535838 podman[100476]: 2025-11-25 23:33:43.699046846 +0000 UTC m=+0.837013861 container remove b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:43 np0005535838 systemd[1]: libpod-conmon-b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c.scope: Deactivated successfully.
Nov 25 18:33:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 18:33:44 np0005535838 podman[100705]: 2025-11-25 23:33:44.060442131 +0000 UTC m=+0.056487351 container create fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 18:33:44 np0005535838 systemd[1]: Started libpod-conmon-fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c.scope.
Nov 25 18:33:44 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:44 np0005535838 podman[100705]: 2025-11-25 23:33:44.042703818 +0000 UTC m=+0.038749058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:44 np0005535838 podman[100705]: 2025-11-25 23:33:44.13821675 +0000 UTC m=+0.134262010 container init fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:44 np0005535838 podman[100705]: 2025-11-25 23:33:44.144582266 +0000 UTC m=+0.140627476 container start fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:44 np0005535838 podman[100705]: 2025-11-25 23:33:44.14884028 +0000 UTC m=+0.144885520 container attach fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:44 np0005535838 sad_joliot[100721]: 167 167
Nov 25 18:33:44 np0005535838 systemd[1]: libpod-fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c.scope: Deactivated successfully.
Nov 25 18:33:44 np0005535838 podman[100705]: 2025-11-25 23:33:44.150442709 +0000 UTC m=+0.146487939 container died fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:33:44 np0005535838 systemd[1]: var-lib-containers-storage-overlay-418fb8786eb49df7ba841069155a9aef950f24003c65ef6bb321cdc9f77f31c0-merged.mount: Deactivated successfully.
Nov 25 18:33:44 np0005535838 podman[100705]: 2025-11-25 23:33:44.191390589 +0000 UTC m=+0.187435839 container remove fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:44 np0005535838 systemd[1]: libpod-conmon-fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c.scope: Deactivated successfully.
Nov 25 18:33:44 np0005535838 podman[100745]: 2025-11-25 23:33:44.372858411 +0000 UTC m=+0.037495257 container create e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:44 np0005535838 systemd[1]: Started libpod-conmon-e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632.scope.
Nov 25 18:33:44 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:44 np0005535838 podman[100745]: 2025-11-25 23:33:44.452988987 +0000 UTC m=+0.117625843 container init e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:33:44 np0005535838 podman[100745]: 2025-11-25 23:33:44.358536021 +0000 UTC m=+0.023172887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:44 np0005535838 podman[100745]: 2025-11-25 23:33:44.460432829 +0000 UTC m=+0.125069675 container start e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 18:33:44 np0005535838 podman[100745]: 2025-11-25 23:33:44.46293343 +0000 UTC m=+0.127570296 container attach e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:33:44 np0005535838 python3[100792]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:44 np0005535838 podman[100793]: 2025-11-25 23:33:44.701735462 +0000 UTC m=+0.036764589 container create ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:44 np0005535838 systemd[1]: Started libpod-conmon-ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f.scope.
Nov 25 18:33:44 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c35bc2c1aa6742e14dd57843e66050bda32fd28d815a554e80c29f945c8847c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:44 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c35bc2c1aa6742e14dd57843e66050bda32fd28d815a554e80c29f945c8847c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:44 np0005535838 podman[100793]: 2025-11-25 23:33:44.763034469 +0000 UTC m=+0.098063606 container init ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 18:33:44 np0005535838 podman[100793]: 2025-11-25 23:33:44.768460961 +0000 UTC m=+0.103490088 container start ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:44 np0005535838 podman[100793]: 2025-11-25 23:33:44.77169219 +0000 UTC m=+0.106721317 container attach ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:44 np0005535838 podman[100793]: 2025-11-25 23:33:44.68569134 +0000 UTC m=+0.020720487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:45 np0005535838 zen_wright[100762]: {
Nov 25 18:33:45 np0005535838 zen_wright[100762]:    "0": [
Nov 25 18:33:45 np0005535838 zen_wright[100762]:        {
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "devices": [
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "/dev/loop3"
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            ],
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_name": "ceph_lv0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_size": "21470642176",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "name": "ceph_lv0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "tags": {
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.crush_device_class": "",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.encrypted": "0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.osd_id": "0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.type": "block",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.vdo": "0"
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            },
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "type": "block",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "vg_name": "ceph_vg0"
Nov 25 18:33:45 np0005535838 zen_wright[100762]:        }
Nov 25 18:33:45 np0005535838 zen_wright[100762]:    ],
Nov 25 18:33:45 np0005535838 zen_wright[100762]:    "1": [
Nov 25 18:33:45 np0005535838 zen_wright[100762]:        {
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "devices": [
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "/dev/loop4"
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            ],
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_name": "ceph_lv1",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_size": "21470642176",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "name": "ceph_lv1",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "tags": {
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.crush_device_class": "",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.encrypted": "0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.osd_id": "1",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.type": "block",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.vdo": "0"
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            },
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "type": "block",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "vg_name": "ceph_vg1"
Nov 25 18:33:45 np0005535838 zen_wright[100762]:        }
Nov 25 18:33:45 np0005535838 zen_wright[100762]:    ],
Nov 25 18:33:45 np0005535838 zen_wright[100762]:    "2": [
Nov 25 18:33:45 np0005535838 zen_wright[100762]:        {
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "devices": [
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "/dev/loop5"
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            ],
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_name": "ceph_lv2",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_size": "21470642176",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "name": "ceph_lv2",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "tags": {
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.crush_device_class": "",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.encrypted": "0",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.osd_id": "2",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.type": "block",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:                "ceph.vdo": "0"
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            },
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "type": "block",
Nov 25 18:33:45 np0005535838 zen_wright[100762]:            "vg_name": "ceph_vg2"
Nov 25 18:33:45 np0005535838 zen_wright[100762]:        }
Nov 25 18:33:45 np0005535838 zen_wright[100762]:    ]
Nov 25 18:33:45 np0005535838 zen_wright[100762]: }
Nov 25 18:33:45 np0005535838 systemd[1]: libpod-e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632.scope: Deactivated successfully.
Nov 25 18:33:45 np0005535838 conmon[100762]: conmon e247544de738d30fe81a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632.scope/container/memory.events
Nov 25 18:33:45 np0005535838 podman[100745]: 2025-11-25 23:33:45.233067387 +0000 UTC m=+0.897704233 container died e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:33:45 np0005535838 systemd[1]: var-lib-containers-storage-overlay-64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1-merged.mount: Deactivated successfully.
Nov 25 18:33:45 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 18:33:45 np0005535838 hardcore_mirzakhani[100808]: 
Nov 25 18:33:45 np0005535838 hardcore_mirzakhani[100808]: [{"container_id": "42d7403704ba", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.63%", "created": "2025-11-25T23:32:23.768831Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-25T23:32:23.818771Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982592Z", "memory_usage": 11607736, "ports": [], "service_name": "crash", "started": "2025-11-25T23:32:23.606904Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@crash.compute-0", "version": "18.2.7"}, {"container_id": "6b229c37a0bc", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "5.93%", "created": "2025-11-25T23:33:38.932742Z", "daemon_id": "cephfs.compute-0.bgauhq", "daemon_name": "mds.cephfs.compute-0.bgauhq", "daemon_type": "mds", "events": ["2025-11-25T23:33:38.979298Z daemon:mds.cephfs.compute-0.bgauhq [INFO] \"Deployed mds.cephfs.compute-0.bgauhq on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982819Z", "memory_usage": 13516144, "ports": [], "service_name": "mds.cephfs", "started": "2025-11-25T23:33:38.809974Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@mds.cephfs.compute-0.bgauhq", "version": "18.2.7"}, {"container_id": "cb17cd0be6b6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "30.06%", "created": "2025-11-25T23:31:12.640654Z", "daemon_id": "compute-0.gwqfsl", "daemon_name": "mgr.compute-0.gwqfsl", "daemon_type": "mgr", "events": ["2025-11-25T23:33:14.755740Z daemon:mgr.compute-0.gwqfsl [INFO] \"Reconfigured mgr.compute-0.gwqfsl on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982532Z", "memory_usage": 547880960, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-25T23:31:12.511267Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@mgr.compute-0.gwqfsl", "version": "18.2.7"}, {"container_id": "42789e176a5d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.28%", "created": "2025-11-25T23:31:07.103509Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-25T23:33:13.882670Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982456Z", "memory_request": 2147483648, "memory_usage": 37098618, "ports": [], "service_name": "mon", "started": "2025-11-25T23:31:09.966387Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@mon.compute-0", "version": "18.2.7"}, {"container_id": "1cdf379c2ca7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.72%", "created": "2025-11-25T23:32:50.058505Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-25T23:32:50.125877Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982650Z", "memory_request": 4294967296, "memory_usage": 58709770, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T23:32:49.921521Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@osd.0", "version": "18.2.7"}, {"container_id": "210a65a79e01", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.00%", "created": "2025-11-25T23:32:54.628621Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-25T23:32:54.700615Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982707Z", "memory_request": 4294967296, "memory_usage": 57336135, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T23:32:54.499199Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@osd.1", "version": "18.2.7"}, {"container_id": "4adea0c725a0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.14%", "created": "2025-11-25T23:32:59.635748Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-25T23:32:59.682219Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982763Z", "memory_request": 4294967296, "memory_usage": 55585013, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T23:32:59.539403Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@osd.2", "version": "18.2.7"}]
Nov 25 18:33:45 np0005535838 podman[100745]: 2025-11-25 23:33:45.279645185 +0000 UTC m=+0.944282021 container remove e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:33:45 np0005535838 systemd[1]: libpod-conmon-e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632.scope: Deactivated successfully.
Nov 25 18:33:45 np0005535838 systemd[1]: libpod-ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f.scope: Deactivated successfully.
Nov 25 18:33:45 np0005535838 podman[100793]: 2025-11-25 23:33:45.293123944 +0000 UTC m=+0.628153071 container died ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:33:45 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c35bc2c1aa6742e14dd57843e66050bda32fd28d815a554e80c29f945c8847c2-merged.mount: Deactivated successfully.
Nov 25 18:33:45 np0005535838 podman[100793]: 2025-11-25 23:33:45.336105253 +0000 UTC m=+0.671134390 container remove ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:45 np0005535838 systemd[1]: libpod-conmon-ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f.scope: Deactivated successfully.
Nov 25 18:33:45 np0005535838 podman[101004]: 2025-11-25 23:33:45.817485509 +0000 UTC m=+0.046611460 container create 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:45 np0005535838 systemd[1]: Started libpod-conmon-9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d.scope.
Nov 25 18:33:45 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:45 np0005535838 podman[101004]: 2025-11-25 23:33:45.793288668 +0000 UTC m=+0.022414679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:45 np0005535838 podman[101004]: 2025-11-25 23:33:45.900380353 +0000 UTC m=+0.129506354 container init 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:45 np0005535838 podman[101004]: 2025-11-25 23:33:45.906495802 +0000 UTC m=+0.135621703 container start 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:45 np0005535838 podman[101004]: 2025-11-25 23:33:45.909324701 +0000 UTC m=+0.138450652 container attach 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 18:33:45 np0005535838 adoring_poincare[101020]: 167 167
Nov 25 18:33:45 np0005535838 systemd[1]: libpod-9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d.scope: Deactivated successfully.
Nov 25 18:33:45 np0005535838 podman[101004]: 2025-11-25 23:33:45.912768966 +0000 UTC m=+0.141894887 container died 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:45 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a17f1b74fdd230ab66a069872264c815ffa5a679d517e921658b882ef5352305-merged.mount: Deactivated successfully.
Nov 25 18:33:45 np0005535838 podman[101004]: 2025-11-25 23:33:45.955094169 +0000 UTC m=+0.184220090 container remove 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:45 np0005535838 systemd[1]: libpod-conmon-9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d.scope: Deactivated successfully.
Nov 25 18:33:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 18:33:46 np0005535838 podman[101044]: 2025-11-25 23:33:46.140455166 +0000 UTC m=+0.059316260 container create dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:33:46 np0005535838 systemd[1]: Started libpod-conmon-dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad.scope.
Nov 25 18:33:46 np0005535838 podman[101044]: 2025-11-25 23:33:46.113540809 +0000 UTC m=+0.032401963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:46 np0005535838 podman[101044]: 2025-11-25 23:33:46.225479692 +0000 UTC m=+0.144340816 container init dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:46 np0005535838 podman[101044]: 2025-11-25 23:33:46.23277874 +0000 UTC m=+0.151639814 container start dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 18:33:46 np0005535838 podman[101044]: 2025-11-25 23:33:46.235292432 +0000 UTC m=+0.154153546 container attach dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:46 np0005535838 python3[101087]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:46 np0005535838 podman[101091]: 2025-11-25 23:33:46.391600069 +0000 UTC m=+0.049007968 container create 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:33:46 np0005535838 systemd[1]: Started libpod-conmon-10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d.scope.
Nov 25 18:33:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638e4b05250fb483a9812d6e81a156713fbdffb65b1a0e95ca92487a8e36b669/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638e4b05250fb483a9812d6e81a156713fbdffb65b1a0e95ca92487a8e36b669/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:46 np0005535838 podman[101091]: 2025-11-25 23:33:46.46084265 +0000 UTC m=+0.118250569 container init 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:46 np0005535838 podman[101091]: 2025-11-25 23:33:46.369555931 +0000 UTC m=+0.026963880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:46 np0005535838 podman[101091]: 2025-11-25 23:33:46.470545707 +0000 UTC m=+0.127953616 container start 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:33:46 np0005535838 podman[101091]: 2025-11-25 23:33:46.474735149 +0000 UTC m=+0.132143068 container attach 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:33:47 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 18:33:47 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661170587' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 18:33:47 np0005535838 youthful_yonath[101107]: 
Nov 25 18:33:47 np0005535838 youthful_yonath[101107]: {"fsid":"101922db-575f-58e2-980f-928050464f69","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":156,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1764113585,"num_in_osds":3,"osd_in_since":1764113559,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":23,"data_bytes":461642,"bytes_used":83886080,"bytes_avail":64328040448,"bytes_total":64411926528,"write_bytes_sec":1194,"read_op_per_sec":0,"write_op_per_sec":3},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.bgauhq","status":"up:active","gid":14252}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T23:32:57.989925+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 25 18:33:47 np0005535838 systemd[1]: libpod-10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d.scope: Deactivated successfully.
Nov 25 18:33:47 np0005535838 podman[101148]: 2025-11-25 23:33:47.101845544 +0000 UTC m=+0.030075066 container died 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:33:47 np0005535838 systemd[1]: var-lib-containers-storage-overlay-638e4b05250fb483a9812d6e81a156713fbdffb65b1a0e95ca92487a8e36b669-merged.mount: Deactivated successfully.
Nov 25 18:33:47 np0005535838 podman[101148]: 2025-11-25 23:33:47.137636258 +0000 UTC m=+0.065865750 container remove 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:47 np0005535838 systemd[1]: libpod-conmon-10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d.scope: Deactivated successfully.
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]: {
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "osd_id": 2,
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "type": "bluestore"
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:    },
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "osd_id": 1,
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "type": "bluestore"
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:    },
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "osd_id": 0,
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:        "type": "bluestore"
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]:    }
Nov 25 18:33:47 np0005535838 compassionate_tu[101085]: }
Nov 25 18:33:47 np0005535838 systemd[1]: libpod-dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad.scope: Deactivated successfully.
Nov 25 18:33:47 np0005535838 systemd[1]: libpod-dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad.scope: Consumed 1.012s CPU time.
Nov 25 18:33:47 np0005535838 podman[101044]: 2025-11-25 23:33:47.242387635 +0000 UTC m=+1.161248709 container died dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:47 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60-merged.mount: Deactivated successfully.
Nov 25 18:33:47 np0005535838 podman[101044]: 2025-11-25 23:33:47.296658111 +0000 UTC m=+1.215519205 container remove dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:33:47 np0005535838 systemd[1]: libpod-conmon-dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad.scope: Deactivated successfully.
Nov 25 18:33:47 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:47 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:47 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:47 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:47 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 606ebbee-60bc-4d6e-8e06-221f1623ff2e does not exist
Nov 25 18:33:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 18:33:48 np0005535838 python3[101386]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:48 np0005535838 podman[101419]: 2025-11-25 23:33:48.179193012 +0000 UTC m=+0.053722642 container create 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:48 np0005535838 systemd[1]: Started libpod-conmon-0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a.scope.
Nov 25 18:33:48 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa560be9b574c473870ffd69e901d616d92c5592d46db1827d8d15435289feb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa560be9b574c473870ffd69e901d616d92c5592d46db1827d8d15435289feb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:48 np0005535838 podman[101419]: 2025-11-25 23:33:48.150661526 +0000 UTC m=+0.025191196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:48 np0005535838 podman[101419]: 2025-11-25 23:33:48.282772242 +0000 UTC m=+0.157301902 container init 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:48 np0005535838 podman[101443]: 2025-11-25 23:33:48.283529321 +0000 UTC m=+0.111887264 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 25 18:33:48 np0005535838 podman[101419]: 2025-11-25 23:33:48.291147166 +0000 UTC m=+0.165676786 container start 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:33:48 np0005535838 podman[101419]: 2025-11-25 23:33:48.309671859 +0000 UTC m=+0.184201489 container attach 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:33:48 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:48 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:48 np0005535838 podman[101443]: 2025-11-25 23:33:48.401492721 +0000 UTC m=+0.229850604 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:48 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 18:33:48 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470785052' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 18:33:48 np0005535838 focused_volhard[101460]: 
Nov 25 18:33:48 np0005535838 focused_volhard[101460]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""}]
Nov 25 18:33:48 np0005535838 systemd[1]: libpod-0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a.scope: Deactivated successfully.
Nov 25 18:33:48 np0005535838 podman[101594]: 2025-11-25 23:33:48.877150037 +0000 UTC m=+0.020723707 container died 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:48 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1fa560be9b574c473870ffd69e901d616d92c5592d46db1827d8d15435289feb-merged.mount: Deactivated successfully.
Nov 25 18:33:48 np0005535838 podman[101594]: 2025-11-25 23:33:48.992014902 +0000 UTC m=+0.135588562 container remove 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:33:48 np0005535838 systemd[1]: libpod-conmon-0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a.scope: Deactivated successfully.
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:49 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 9da7b15e-4eef-4c77-82c9-a6215d5e6beb does not exist
Nov 25 18:33:49 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev fe68a7cc-c80c-48f0-b63c-a49914e80135 does not exist
Nov 25 18:33:49 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 68cc684d-90b3-4f1e-b500-bcd7c1bb97ff does not exist
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:49 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:33:49 np0005535838 podman[101768]: 2025-11-25 23:33:49.549157848 +0000 UTC m=+0.047810299 container create 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:49 np0005535838 systemd[1]: Started libpod-conmon-28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413.scope.
Nov 25 18:33:49 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:49 np0005535838 podman[101768]: 2025-11-25 23:33:49.608443175 +0000 UTC m=+0.107095516 container init 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 18:33:49 np0005535838 podman[101768]: 2025-11-25 23:33:49.522831145 +0000 UTC m=+0.021483546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:49 np0005535838 podman[101768]: 2025-11-25 23:33:49.617796794 +0000 UTC m=+0.116449155 container start 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:49 np0005535838 determined_shamir[101784]: 167 167
Nov 25 18:33:49 np0005535838 systemd[1]: libpod-28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413.scope: Deactivated successfully.
Nov 25 18:33:49 np0005535838 podman[101768]: 2025-11-25 23:33:49.62133314 +0000 UTC m=+0.119985461 container attach 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:49 np0005535838 podman[101768]: 2025-11-25 23:33:49.621917015 +0000 UTC m=+0.120569386 container died 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:33:49 np0005535838 systemd[1]: var-lib-containers-storage-overlay-14c9877a4843b81d88f490d937592c54c322f124354c5e8ed6df562e8e8c1678-merged.mount: Deactivated successfully.
Nov 25 18:33:49 np0005535838 podman[101768]: 2025-11-25 23:33:49.663684495 +0000 UTC m=+0.162336856 container remove 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 18:33:49 np0005535838 systemd[1]: libpod-conmon-28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413.scope: Deactivated successfully.
Nov 25 18:33:49 np0005535838 podman[101809]: 2025-11-25 23:33:49.859426434 +0000 UTC m=+0.038582922 container create 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:33:49 np0005535838 systemd[1]: Started libpod-conmon-934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950.scope.
Nov 25 18:33:49 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:49 np0005535838 podman[101809]: 2025-11-25 23:33:49.844318116 +0000 UTC m=+0.023474614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:49 np0005535838 podman[101809]: 2025-11-25 23:33:49.946801799 +0000 UTC m=+0.125958367 container init 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:33:49 np0005535838 podman[101809]: 2025-11-25 23:33:49.962402479 +0000 UTC m=+0.141558997 container start 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:49 np0005535838 podman[101809]: 2025-11-25 23:33:49.967896284 +0000 UTC m=+0.147052842 container attach 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 25 18:33:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 18:33:50 np0005535838 python3[101853]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:50 np0005535838 podman[101856]: 2025-11-25 23:33:50.163561792 +0000 UTC m=+0.051237653 container create 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:50 np0005535838 systemd[1]: Started libpod-conmon-86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724.scope.
Nov 25 18:33:50 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:50 np0005535838 podman[101856]: 2025-11-25 23:33:50.147019798 +0000 UTC m=+0.034695669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf3ae1d7f881332b7bc9939a78c0e177fa21248178a69c919c377bbbcc87398/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf3ae1d7f881332b7bc9939a78c0e177fa21248178a69c919c377bbbcc87398/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:50 np0005535838 podman[101856]: 2025-11-25 23:33:50.2613408 +0000 UTC m=+0.149016741 container init 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:33:50 np0005535838 podman[101856]: 2025-11-25 23:33:50.270427271 +0000 UTC m=+0.158103162 container start 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:50 np0005535838 podman[101856]: 2025-11-25 23:33:50.274001309 +0000 UTC m=+0.161677200 container attach 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 25 18:33:50 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840893065' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 25 18:33:50 np0005535838 gifted_wozniak[101871]: mimic
Nov 25 18:33:50 np0005535838 systemd[1]: libpod-86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724.scope: Deactivated successfully.
Nov 25 18:33:50 np0005535838 podman[101856]: 2025-11-25 23:33:50.851286016 +0000 UTC m=+0.738961957 container died 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:50 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ccf3ae1d7f881332b7bc9939a78c0e177fa21248178a69c919c377bbbcc87398-merged.mount: Deactivated successfully.
Nov 25 18:33:50 np0005535838 podman[101856]: 2025-11-25 23:33:50.905336296 +0000 UTC m=+0.793012147 container remove 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:50 np0005535838 systemd[1]: libpod-conmon-86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724.scope: Deactivated successfully.
Nov 25 18:33:51 np0005535838 hungry_bouman[101846]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:33:51 np0005535838 hungry_bouman[101846]: --> relative data size: 1.0
Nov 25 18:33:51 np0005535838 hungry_bouman[101846]: --> All data devices are unavailable
Nov 25 18:33:51 np0005535838 systemd[1]: libpod-934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950.scope: Deactivated successfully.
Nov 25 18:33:51 np0005535838 systemd[1]: libpod-934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950.scope: Consumed 1.044s CPU time.
Nov 25 18:33:51 np0005535838 podman[101809]: 2025-11-25 23:33:51.057751298 +0000 UTC m=+1.236907816 container died 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 18:33:51 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b-merged.mount: Deactivated successfully.
Nov 25 18:33:51 np0005535838 podman[101809]: 2025-11-25 23:33:51.124621531 +0000 UTC m=+1.303778029 container remove 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:51 np0005535838 systemd[1]: libpod-conmon-934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950.scope: Deactivated successfully.
Nov 25 18:33:51 np0005535838 podman[102111]: 2025-11-25 23:33:51.866870897 +0000 UTC m=+0.061615876 container create 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:51 np0005535838 systemd[1]: Started libpod-conmon-8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c.scope.
Nov 25 18:33:51 np0005535838 python3[102110]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:33:51 np0005535838 podman[102111]: 2025-11-25 23:33:51.839384747 +0000 UTC m=+0.034129766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:51 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:51 np0005535838 podman[102111]: 2025-11-25 23:33:51.963835765 +0000 UTC m=+0.158580784 container init 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:33:51 np0005535838 podman[102111]: 2025-11-25 23:33:51.970401775 +0000 UTC m=+0.165146714 container start 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:51 np0005535838 distracted_wright[102128]: 167 167
Nov 25 18:33:51 np0005535838 systemd[1]: libpod-8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c.scope: Deactivated successfully.
Nov 25 18:33:51 np0005535838 podman[102111]: 2025-11-25 23:33:51.974518236 +0000 UTC m=+0.169263205 container attach 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:33:51 np0005535838 conmon[102128]: conmon 8787b13a45966a4f5ba1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c.scope/container/memory.events
Nov 25 18:33:51 np0005535838 podman[102111]: 2025-11-25 23:33:51.976290409 +0000 UTC m=+0.171035378 container died 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 18:33:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 18:33:52 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1c2557da1c7e0c6744c6a6926da1353facbc7e62f7bde4dfc9c523f2f08445d1-merged.mount: Deactivated successfully.
Nov 25 18:33:52 np0005535838 podman[102111]: 2025-11-25 23:33:52.030052833 +0000 UTC m=+0.224797792 container remove 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:52 np0005535838 podman[102131]: 2025-11-25 23:33:52.035369973 +0000 UTC m=+0.084147167 container create 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 18:33:52 np0005535838 systemd[1]: libpod-conmon-8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c.scope: Deactivated successfully.
Nov 25 18:33:52 np0005535838 systemd[1]: Started libpod-conmon-3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654.scope.
Nov 25 18:33:52 np0005535838 podman[102131]: 2025-11-25 23:33:51.997808085 +0000 UTC m=+0.046585319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 18:33:52 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:52 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb5d94a7754066c2649dde9d59fd3f447942b1fc7c1a5779cfe3835d399ea9f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:52 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb5d94a7754066c2649dde9d59fd3f447942b1fc7c1a5779cfe3835d399ea9f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:52 np0005535838 podman[102131]: 2025-11-25 23:33:52.12169037 +0000 UTC m=+0.170467524 container init 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 18:33:52 np0005535838 podman[102131]: 2025-11-25 23:33:52.129039429 +0000 UTC m=+0.177816613 container start 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:52 np0005535838 podman[102131]: 2025-11-25 23:33:52.132679158 +0000 UTC m=+0.181456312 container attach 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:33:52 np0005535838 podman[102169]: 2025-11-25 23:33:52.247454491 +0000 UTC m=+0.062035315 container create 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 18:33:52 np0005535838 systemd[1]: Started libpod-conmon-2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07.scope.
Nov 25 18:33:52 np0005535838 podman[102169]: 2025-11-25 23:33:52.222208595 +0000 UTC m=+0.036789469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:52 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:52 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:52 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:52 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:52 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:52 np0005535838 podman[102169]: 2025-11-25 23:33:52.358822271 +0000 UTC m=+0.173403145 container init 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:33:52 np0005535838 podman[102169]: 2025-11-25 23:33:52.373725225 +0000 UTC m=+0.188306039 container start 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:33:52 np0005535838 podman[102169]: 2025-11-25 23:33:52.377262712 +0000 UTC m=+0.191843586 container attach 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:33:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 25 18:33:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357608728' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 25 18:33:52 np0005535838 crazy_einstein[102160]: 
Nov 25 18:33:52 np0005535838 crazy_einstein[102160]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 25 18:33:52 np0005535838 systemd[1]: libpod-3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654.scope: Deactivated successfully.
Nov 25 18:33:52 np0005535838 podman[102131]: 2025-11-25 23:33:52.742920341 +0000 UTC m=+0.791697565 container died 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:33:52 np0005535838 systemd[1]: var-lib-containers-storage-overlay-bb5d94a7754066c2649dde9d59fd3f447942b1fc7c1a5779cfe3835d399ea9f1-merged.mount: Deactivated successfully.
Nov 25 18:33:52 np0005535838 podman[102131]: 2025-11-25 23:33:52.80106435 +0000 UTC m=+0.849841544 container remove 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:52 np0005535838 systemd[1]: libpod-conmon-3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654.scope: Deactivated successfully.
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]: {
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:    "0": [
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:        {
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "devices": [
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "/dev/loop3"
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            ],
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_name": "ceph_lv0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_size": "21470642176",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "name": "ceph_lv0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "tags": {
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.crush_device_class": "",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.encrypted": "0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.osd_id": "0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.type": "block",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.vdo": "0"
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            },
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "type": "block",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "vg_name": "ceph_vg0"
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:        }
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:    ],
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:    "1": [
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:        {
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "devices": [
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "/dev/loop4"
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            ],
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_name": "ceph_lv1",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_size": "21470642176",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "name": "ceph_lv1",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "tags": {
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.crush_device_class": "",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.encrypted": "0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.osd_id": "1",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.type": "block",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.vdo": "0"
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            },
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "type": "block",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "vg_name": "ceph_vg1"
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:        }
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:    ],
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:    "2": [
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:        {
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "devices": [
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "/dev/loop5"
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            ],
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_name": "ceph_lv2",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_size": "21470642176",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "name": "ceph_lv2",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "tags": {
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.cluster_name": "ceph",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.crush_device_class": "",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.encrypted": "0",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.osd_id": "2",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.type": "block",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:                "ceph.vdo": "0"
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            },
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "type": "block",
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:            "vg_name": "ceph_vg2"
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:        }
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]:    ]
Nov 25 18:33:53 np0005535838 hopeful_wright[102186]: }
Nov 25 18:33:53 np0005535838 systemd[1]: libpod-2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07.scope: Deactivated successfully.
Nov 25 18:33:53 np0005535838 podman[102228]: 2025-11-25 23:33:53.239082607 +0000 UTC m=+0.028127098 container died 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:33:53 np0005535838 systemd[1]: var-lib-containers-storage-overlay-677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946-merged.mount: Deactivated successfully.
Nov 25 18:33:53 np0005535838 irqbalance[783]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 25 18:33:53 np0005535838 irqbalance[783]: IRQ 27 affinity is now unmanaged
Nov 25 18:33:53 np0005535838 podman[102228]: 2025-11-25 23:33:53.296965171 +0000 UTC m=+0.086009612 container remove 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 25 18:33:53 np0005535838 systemd[1]: libpod-conmon-2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07.scope: Deactivated successfully.
Nov 25 18:33:53 np0005535838 podman[102381]: 2025-11-25 23:33:53.993560142 +0000 UTC m=+0.036792089 container create 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 18:33:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 18:33:54 np0005535838 systemd[1]: Started libpod-conmon-70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964.scope.
Nov 25 18:33:54 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:54 np0005535838 podman[102381]: 2025-11-25 23:33:54.065488308 +0000 UTC m=+0.108720275 container init 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:33:54 np0005535838 podman[102381]: 2025-11-25 23:33:53.974707472 +0000 UTC m=+0.017939469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:54 np0005535838 podman[102381]: 2025-11-25 23:33:54.071895635 +0000 UTC m=+0.115127612 container start 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:33:54 np0005535838 compassionate_haibt[102397]: 167 167
Nov 25 18:33:54 np0005535838 podman[102381]: 2025-11-25 23:33:54.075399091 +0000 UTC m=+0.118631038 container attach 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:54 np0005535838 systemd[1]: libpod-70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964.scope: Deactivated successfully.
Nov 25 18:33:54 np0005535838 podman[102381]: 2025-11-25 23:33:54.076561418 +0000 UTC m=+0.119793396 container died 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:33:54 np0005535838 systemd[1]: var-lib-containers-storage-overlay-77230e5533964828db4a8b5d68fb85c6688195af4308f87e6ce5dc8117e9a18e-merged.mount: Deactivated successfully.
Nov 25 18:33:54 np0005535838 podman[102381]: 2025-11-25 23:33:54.113937432 +0000 UTC m=+0.157169409 container remove 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:33:54 np0005535838 systemd[1]: libpod-conmon-70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964.scope: Deactivated successfully.
Nov 25 18:33:54 np0005535838 podman[102419]: 2025-11-25 23:33:54.252583707 +0000 UTC m=+0.038907391 container create 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:33:54 np0005535838 systemd[1]: Started libpod-conmon-99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3.scope.
Nov 25 18:33:54 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:33:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:33:54 np0005535838 podman[102419]: 2025-11-25 23:33:54.317033811 +0000 UTC m=+0.103357505 container init 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:33:54 np0005535838 podman[102419]: 2025-11-25 23:33:54.323656113 +0000 UTC m=+0.109979797 container start 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:33:54 np0005535838 podman[102419]: 2025-11-25 23:33:54.326736508 +0000 UTC m=+0.113060212 container attach 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:33:54 np0005535838 podman[102419]: 2025-11-25 23:33:54.235982662 +0000 UTC m=+0.022306376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:33:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:33:55 np0005535838 musing_galileo[102435]: {
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "osd_id": 2,
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "type": "bluestore"
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:    },
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "osd_id": 1,
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "type": "bluestore"
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:    },
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "osd_id": 0,
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:        "type": "bluestore"
Nov 25 18:33:55 np0005535838 musing_galileo[102435]:    }
Nov 25 18:33:55 np0005535838 musing_galileo[102435]: }
Nov 25 18:33:55 np0005535838 systemd[1]: libpod-99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3.scope: Deactivated successfully.
Nov 25 18:33:55 np0005535838 podman[102419]: 2025-11-25 23:33:55.283003591 +0000 UTC m=+1.069327275 container died 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:33:55 np0005535838 systemd[1]: var-lib-containers-storage-overlay-65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77-merged.mount: Deactivated successfully.
Nov 25 18:33:55 np0005535838 podman[102419]: 2025-11-25 23:33:55.344620515 +0000 UTC m=+1.130944189 container remove 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:33:55 np0005535838 systemd[1]: libpod-conmon-99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3.scope: Deactivated successfully.
Nov 25 18:33:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:33:55 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:33:55 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:55 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 87346fcb-af3e-4939-85f9-4a21fdfa9d2a does not exist
Nov 25 18:33:55 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:55 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:33:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:33:55
Nov 25 18:33:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:33:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:33:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images']
Nov 25 18:33:55 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 18:33:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 18:33:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:33:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 25 18:33:56 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:33:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:33:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 25 18:33:56 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 25 18:33:56 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev a294a9b9-bd08-41e2-b985-63fc655b363c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 18:33:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 18:33:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:33:56 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:33:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 25 18:33:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:33:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 25 18:33:57 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 25 18:33:57 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev 1c51dc09-4661-4ea2-ada0-20b90163e486 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 18:33:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:33:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:33:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 18:33:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:33:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 25 18:33:58 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=17/18 n=0 ec=15/15 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=11.871255875s) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active pruub 69.295204163s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:33:58 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev 05bce111-1627-4951-bdf9-8e4cc223bd79 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:33:58 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=17/18 n=0 ec=15/15 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=11.871255875s) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown pruub 69.295204163s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 18:33:58 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=35 pruub=10.114383698s) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active pruub 73.546501160s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=35 pruub=10.114383698s) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown pruub 73.546501160s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 25 18:33:59 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev b1f63197-f9e1-4ab4-b487-3863e818d0ec (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 25 18:33:59 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1e( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1f( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.7( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.6( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.4( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.2( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.b( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.12( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.18( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.17( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.19( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=35/36 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.2( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.4( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=15/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.19( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:33:59 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v89: 69 pgs: 62 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 25 18:34:00 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev 3ace7068-2acb-4dfe-803b-f2931d775e25 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:00 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=10.073327065s) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active pruub 79.905021667s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=10.073327065s) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown pruub 79.905021667s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 25 18:34:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:34:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 25 18:34:01 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] update: starting ev 7699bc09-a2a8-4e48-b17a-5c13d7a72e5a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev a294a9b9-bd08-41e2-b985-63fc655b363c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event a294a9b9-bd08-41e2-b985-63fc655b363c (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev 1c51dc09-4661-4ea2-ada0-20b90163e486 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event 1c51dc09-4661-4ea2-ada0-20b90163e486 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev 05bce111-1627-4951-bdf9-8e4cc223bd79 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event 05bce111-1627-4951-bdf9-8e4cc223bd79 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev b1f63197-f9e1-4ab4-b487-3863e818d0ec (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event b1f63197-f9e1-4ab4-b487-3863e818d0ec (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev 3ace7068-2acb-4dfe-803b-f2931d775e25 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event 3ace7068-2acb-4dfe-803b-f2931d775e25 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] complete: finished ev 7699bc09-a2a8-4e48-b17a-5c13d7a72e5a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 18:34:01 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event 7699bc09-a2a8-4e48-b17a-5c13d7a72e5a (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 25 18:34:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 25 18:34:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:34:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:34:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 18:34:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=37/38 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:01 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 62 unknown, 69 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 25 18:34:02 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 37 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=10.846710205s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 72.344696045s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=10.846710205s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 72.344696045s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:02 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=14.256612778s) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active pruub 81.685195923s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=14.256612778s) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown pruub 81.685195923s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 25 18:34:03 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 18:34:03 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 25 18:34:03 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 25 18:34:03 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 39 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=22/23 n=22 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=11.870360374s) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 32'38 mlcod 32'38 active pruub 83.977050781s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.0( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=11.870360374s) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 32'38 mlcod 0'0 unknown pruub 83.977050781s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.12( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.10( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.17( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.7( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.19( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=39/40 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:03 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 77 unknown, 100 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Nov 25 18:34:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 25 18:34:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 25 18:34:04 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 32'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:04 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Nov 25 18:34:04 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Nov 25 18:34:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:05 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 25 18:34:05 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 25 18:34:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 77 unknown, 100 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:34:06 np0005535838 ceph-mgr[75954]: [progress INFO root] Writing back 10 completed events
Nov 25 18:34:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 18:34:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:06 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:07 np0005535838 systemd-logind[789]: New session 34 of user zuul.
Nov 25 18:34:07 np0005535838 systemd[1]: Started Session 34 of User zuul.
Nov 25 18:34:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487118721s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075416565s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487162590s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075515747s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487066269s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075515747s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487041473s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075500488s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486937523s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075416565s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.541646957s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.130157471s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486957550s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075500488s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.541588783s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130157471s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487282753s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075973511s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.536790848s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.125488281s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487258911s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075973511s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.536728859s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125488281s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487183571s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076034546s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487133026s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076034546s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540464401s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129493713s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487164497s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076225281s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486917496s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075981140s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487134933s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076225281s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540402412s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129493713s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540250778s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129425049s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486831665s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075981140s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486952782s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076194763s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540205002s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129425049s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486927032s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076194763s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486803055s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076232910s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486779213s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076232910s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540124893s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129638672s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486709595s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076293945s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486627579s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076232910s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486623764s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076293945s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.539930344s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129531860s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486513138s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076232910s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.539773941s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129531860s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540101051s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.130050659s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540071487s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130050659s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486405373s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076408386s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486298561s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076354980s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486351013s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076408386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.539968491s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.130088806s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486248016s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076354980s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.539942741s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130088806s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486095428s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076354980s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486060143s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076354980s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486003876s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076400757s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486073494s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076492310s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540076256s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129638672s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486012459s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076492310s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485896111s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076400757s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485866547s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076431274s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485957146s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076538086s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485918999s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076538086s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485754967s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076446533s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485826492s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076614380s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485813141s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076431274s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485706329s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076446533s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485775948s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076614380s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456751823s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465843201s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520897865s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530090332s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520854950s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530136108s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456146240s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465744019s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455876350s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465759277s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520113945s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530021667s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455184937s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465225220s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519988060s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530075073s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519800186s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530029297s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454953194s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465225220s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519281387s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530158997s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454054832s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465164185s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518911362s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530174255s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518821716s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530166626s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453702927s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465126038s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454210281s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465759277s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518668175s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530258179s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453030586s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464744568s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518463135s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530212402s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452710152s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464645386s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452497482s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464584351s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518067360s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530242920s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518050194s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530319214s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453290939s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465820312s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451832771s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464576721s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452794075s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465126038s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517469406s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530319214s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451684952s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464576721s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517217636s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530265808s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451214790s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464385986s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452562332s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465751648s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451215744s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464645386s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516825676s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530311584s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450861931s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464378357s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516772270s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530380249s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450617790s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464271545s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450549126s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464279175s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450786591s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464523315s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516482353s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530273438s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516325951s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530418396s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516283035s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530479431s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515798569s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530448914s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.513943672s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805473328s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.513921738s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805473328s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448771477s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740425110s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448755264s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740425110s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448654175s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740394592s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448639870s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740394592s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448849678s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464324951s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448562622s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740417480s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448523521s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740417480s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.513177872s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805099487s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.513118744s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805099487s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447863579s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740653992s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.512705803s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805541992s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447410583s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740371704s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447385788s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740371704s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.512588501s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805541992s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447697639s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740653992s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447261810s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740386963s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447239876s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740386963s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447119713s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740371704s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.512052536s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805320740s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511995316s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805313110s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511977196s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805313110s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.446464539s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739807129s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.512003899s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805320740s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447058678s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740371704s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.446429253s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739807129s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.446962357s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740394592s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.446947098s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740394592s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511787415s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805274963s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511771202s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805305481s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511754990s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805305481s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511736870s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805274963s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511671066s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805328369s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511650085s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805366516s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511633873s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805366516s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511618614s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805328369s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511734009s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805572510s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511713028s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805572510s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445724487s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739570618s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445761681s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739692688s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445349693s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739334106s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445615768s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739570618s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445330620s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739334106s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511328697s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805397034s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445724487s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739692688s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445547104s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739692688s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511263847s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805397034s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445530891s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739692688s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444708824s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739219666s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510831833s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805419922s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444672585s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739219666s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510773659s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805419922s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444632530s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739364624s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510651588s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805412292s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510663033s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805442810s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510628700s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805412292s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444575310s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739364624s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510622025s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805442810s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444879532s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739822388s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444858551s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739822388s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510386467s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805358887s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510434151s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805465698s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444198608s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739227295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444249153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739318848s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510313988s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805358887s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510400772s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805465698s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444231987s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739318848s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510384560s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805511475s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444146156s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739227295s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510334015s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805511475s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444341660s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739570618s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510371208s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805610657s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444326401s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739570618s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510343552s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805610657s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.443850517s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739219666s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.443834305s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739219666s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510123253s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805511475s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510214806s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805618286s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510066032s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805511475s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510153770s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805618286s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436471939s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.733673096s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 25 18:34:08 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 25 18:34:08 np0005535838 python3.9[102686]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 25 18:34:08 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 25 18:34:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 25 18:34:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 18:34:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 18:34:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 25 18:34:09 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 25 18:34:09 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 25 18:34:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:34:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 25 18:34:10 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 18:34:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 25 18:34:10 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 18:34:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 25 18:34:10 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510518074s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.130073547s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510458946s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130073547s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509833336s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129592896s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509752274s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129592896s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505555153s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.125503540s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505515099s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125503540s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509609222s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129646301s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509572029s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129646301s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:10 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 25 18:34:10 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:10 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:10 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:10 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Nov 25 18:34:10 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Nov 25 18:34:10 np0005535838 python3.9[102904]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:34:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 25 18:34:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 25 18:34:11 np0005535838 ceph-mgr[75954]: [progress INFO root] Completed event 39061ea4-a72e-4426-a139-8eb608550a46 (Global Recovery Event) in 10 seconds
Nov 25 18:34:11 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 25 18:34:11 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 18:34:11 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:11 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:11 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:11 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v104: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 199 B/s, 2 keys/s, 2 objects/s recovering
Nov 25 18:34:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 25 18:34:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 18:34:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 25 18:34:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 18:34:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 25 18:34:12 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 25 18:34:12 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976484299s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309509277s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:12 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:12 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976278305s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309593201s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:12 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 18:34:12 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:12 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976170540s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309707642s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:12 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:12 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976114273s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309745789s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:12 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:12 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:12 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:12 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:12 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 25 18:34:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 25 18:34:13 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 25 18:34:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 18:34:13 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:13 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:13 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:13 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:13 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 25 18:34:13 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 25 18:34:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 4 peering, 1 active+recovering, 172 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 199 B/s, 2 keys/s, 2 objects/s recovering
Nov 25 18:34:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:15 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 25 18:34:15 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 25 18:34:15 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 25 18:34:15 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 25 18:34:15 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 25 18:34:15 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 25 18:34:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 4 peering, 1 active+recovering, 172 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 135 B/s, 1 keys/s, 2 objects/s recovering
Nov 25 18:34:16 np0005535838 ceph-mgr[75954]: [progress INFO root] Writing back 11 completed events
Nov 25 18:34:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 18:34:16 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:16 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:16 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 25 18:34:16 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 25 18:34:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v109: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 2 keys/s, 2 objects/s recovering
Nov 25 18:34:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 25 18:34:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 18:34:18 np0005535838 systemd[1]: session-34.scope: Deactivated successfully.
Nov 25 18:34:18 np0005535838 systemd[1]: session-34.scope: Consumed 8.725s CPU time.
Nov 25 18:34:18 np0005535838 systemd-logind[789]: Session 34 logged out. Waiting for processes to exit.
Nov 25 18:34:18 np0005535838 systemd-logind[789]: Removed session 34.
Nov 25 18:34:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 25 18:34:18 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 18:34:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 18:34:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 25 18:34:18 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 25 18:34:18 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 25 18:34:18 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 25 18:34:19 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 18:34:19 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281056404s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 97.130294800s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:19 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281001091s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.130294800s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:19 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279815674s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 97.129615784s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:19 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:19 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279746056s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.129615784s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:19 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 keys/s, 1 objects/s recovering
Nov 25 18:34:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 25 18:34:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 18:34:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:20 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Nov 25 18:34:20 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Nov 25 18:34:20 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
Nov 25 18:34:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 25 18:34:20 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Nov 25 18:34:20 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 18:34:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 18:34:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 25 18:34:20 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 25 18:34:20 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182587624s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active pruub 97.309936523s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:20 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:20 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182564735s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active pruub 97.310157776s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:20 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:20 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:20 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:20 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:20 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:21 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Nov 25 18:34:21 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Nov 25 18:34:21 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 18:34:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 25 18:34:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 25 18:34:21 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 25 18:34:21 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:21 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v114: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 218 B/s, 1 keys/s, 1 objects/s recovering
Nov 25 18:34:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 25 18:34:22 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 18:34:22 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 25 18:34:22 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 25 18:34:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 25 18:34:22 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 18:34:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 25 18:34:22 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 25 18:34:22 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 18:34:23 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 25 18:34:23 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 25 18:34:23 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 25 18:34:23 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 25 18:34:23 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 18:34:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 144 B/s, 1 objects/s recovering
Nov 25 18:34:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 25 18:34:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 18:34:24 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Nov 25 18:34:24 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Nov 25 18:34:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 25 18:34:24 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 18:34:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 18:34:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 25 18:34:24 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 25 18:34:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:25 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 25 18:34:25 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 25 18:34:25 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 18:34:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 163 B/s, 1 objects/s recovering
Nov 25 18:34:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 25 18:34:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 18:34:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:34:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:34:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:34:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:34:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:34:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:34:26 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 25 18:34:26 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 25 18:34:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 25 18:34:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 18:34:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 25 18:34:26 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 25 18:34:27 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 18:34:27 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 25 18:34:27 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 25 18:34:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 18:34:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 175 B/s, 0 objects/s recovering
Nov 25 18:34:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 25 18:34:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 18:34:28 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Nov 25 18:34:28 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Nov 25 18:34:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 25 18:34:29 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 18:34:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 18:34:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 25 18:34:29 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 25 18:34:29 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.088303566s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 105.309837341s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:29 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:29 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:29 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 53 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581254959s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 113.130058289s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:29 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581200600s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.130058289s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:29 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:29 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 25 18:34:29 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 25 18:34:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 0 objects/s recovering
Nov 25 18:34:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 25 18:34:30 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 18:34:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 25 18:34:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 18:34:30 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 18:34:30 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 18:34:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 25 18:34:30 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 25 18:34:30 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:30 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:30 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 25 18:34:30 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 25 18:34:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 25 18:34:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590789795s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 107.316719055s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:30 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 25 18:34:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 25 18:34:31 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 18:34:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 25 18:34:31 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 25 18:34:31 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=55/56 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:31 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 25 18:34:31 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 25 18:34:31 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 25 18:34:31 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 25 18:34:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:34:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 25 18:34:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 18:34:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 25 18:34:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 18:34:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 25 18:34:32 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 25 18:34:32 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 18:34:33 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 18:34:33 np0005535838 systemd-logind[789]: New session 35 of user zuul.
Nov 25 18:34:33 np0005535838 systemd[1]: Started Session 35 of User zuul.
Nov 25 18:34:33 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247831345s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active pruub 113.683471680s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:33 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247759819s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 113.683471680s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:33 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:34:34 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 25 18:34:34 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 18:34:34 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 25 18:34:34 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 18:34:34 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 18:34:34 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 25 18:34:34 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 25 18:34:34 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:34 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 25 18:34:34 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 25 18:34:34 np0005535838 python3.9[103120]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 18:34:34 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 25 18:34:34 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 25 18:34:35 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 18:34:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:35 np0005535838 python3.9[103294]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:34:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:34:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 25 18:34:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 18:34:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 25 18:34:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 18:34:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 25 18:34:36 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 25 18:34:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 18:34:36 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230822563s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active pruub 114.456108093s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:36 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230740547s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 114.456108093s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:36 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:36 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Nov 25 18:34:36 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Nov 25 18:34:37 np0005535838 python3.9[103450]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:34:37 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 25 18:34:37 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 25 18:34:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 25 18:34:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 18:34:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 25 18:34:37 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 25 18:34:37 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 18:34:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 25 18:34:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 18:34:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 25 18:34:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 18:34:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 25 18:34:38 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 25 18:34:38 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 18:34:38 np0005535838 python3.9[103603]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:34:38 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 25 18:34:38 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 25 18:34:39 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 18:34:39 np0005535838 python3.9[103757]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:34:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 18:34:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 25 18:34:40 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 18:34:40 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 25 18:34:40 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 25 18:34:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 25 18:34:40 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 18:34:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 25 18:34:40 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 25 18:34:40 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 18:34:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:40 np0005535838 python3.9[103909]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:34:40 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850687027s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active pruub 121.687095642s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:34:40 np0005535838 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850586891s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 121.687095642s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:34:40 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:34:40 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 25 18:34:40 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 25 18:34:41 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 25 18:34:41 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 25 18:34:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 25 18:34:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 25 18:34:41 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 25 18:34:41 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 18:34:41 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:34:41 np0005535838 python3.9[104059]: ansible-ansible.builtin.service_facts Invoked
Nov 25 18:34:41 np0005535838 network[104076]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 18:34:41 np0005535838 network[104077]: 'network-scripts' will be removed from distribution in near future.
Nov 25 18:34:41 np0005535838 network[104078]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 18:34:41 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 25 18:34:41 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 25 18:34:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 18 B/s, 0 objects/s recovering
Nov 25 18:34:42 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 25 18:34:42 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 25 18:34:43 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 25 18:34:43 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 25 18:34:43 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 25 18:34:43 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 25 18:34:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 15 B/s, 0 objects/s recovering
Nov 25 18:34:44 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 25 18:34:44 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 25 18:34:44 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 25 18:34:44 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 25 18:34:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:45 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 25 18:34:45 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 25 18:34:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 11 B/s, 0 objects/s recovering
Nov 25 18:34:46 np0005535838 python3.9[104338]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:34:46 np0005535838 python3.9[104488]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:34:47 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 25 18:34:47 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 25 18:34:47 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 25 18:34:47 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 25 18:34:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 83 B/s, 0 objects/s recovering
Nov 25 18:34:48 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 25 18:34:48 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 25 18:34:48 np0005535838 python3.9[104642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:34:49 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 25 18:34:49 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 25 18:34:49 np0005535838 python3.9[104800]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:34:50 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 25 18:34:50 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 25 18:34:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 0 objects/s recovering
Nov 25 18:34:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:50 np0005535838 python3.9[104884]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:34:51 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 25 18:34:51 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 25 18:34:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Nov 25 18:34:52 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 25 18:34:52 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 25 18:34:53 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 25 18:34:53 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 25 18:34:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 0 objects/s recovering
Nov 25 18:34:54 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 25 18:34:54 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 25 18:34:54 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 25 18:34:54 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 25 18:34:55 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 25 18:34:55 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 25 18:34:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:34:55
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'backups']
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 0 objects/s recovering
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:34:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:34:56 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 25 18:34:56 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 25 18:34:56 np0005535838 podman[105128]: 2025-11-25 23:34:56.273922012 +0000 UTC m=+0.078201837 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:34:56 np0005535838 podman[105128]: 2025-11-25 23:34:56.429861869 +0000 UTC m=+0.234141634 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 25 18:34:56 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 25 18:34:56 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:57 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 0b1cd114-9a7c-4f30-98a6-0c20fd854f0a does not exist
Nov 25 18:34:57 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 54c11424-98eb-44a0-8721-7fe25d7c96c0 does not exist
Nov 25 18:34:57 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 6f792355-2392-407c-9ef3-3e440e441a57 does not exist
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:34:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:34:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 0 objects/s recovering
Nov 25 18:34:58 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:34:58 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:34:58 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:34:58 np0005535838 podman[105540]: 2025-11-25 23:34:58.448322417 +0000 UTC m=+0.046757085 container create 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:34:58 np0005535838 systemd[77281]: Starting Mark boot as successful...
Nov 25 18:34:58 np0005535838 systemd[77281]: Finished Mark boot as successful.
Nov 25 18:34:58 np0005535838 systemd[1]: Started libpod-conmon-8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd.scope.
Nov 25 18:34:58 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:34:58 np0005535838 podman[105540]: 2025-11-25 23:34:58.513878002 +0000 UTC m=+0.112312700 container init 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:34:58 np0005535838 podman[105540]: 2025-11-25 23:34:58.423630105 +0000 UTC m=+0.022064833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:34:58 np0005535838 podman[105540]: 2025-11-25 23:34:58.520256814 +0000 UTC m=+0.118691512 container start 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 18:34:58 np0005535838 inspiring_heyrovsky[105558]: 167 167
Nov 25 18:34:58 np0005535838 systemd[1]: libpod-8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd.scope: Deactivated successfully.
Nov 25 18:34:58 np0005535838 conmon[105558]: conmon 8418778d9cb54eb8d11a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd.scope/container/memory.events
Nov 25 18:34:58 np0005535838 podman[105540]: 2025-11-25 23:34:58.526347826 +0000 UTC m=+0.124782504 container attach 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 18:34:58 np0005535838 podman[105540]: 2025-11-25 23:34:58.526623195 +0000 UTC m=+0.125057843 container died 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 18:34:58 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c75270cd039dd80ed94ee712bd78803d7773915d22a9db77e2dbffe771a45565-merged.mount: Deactivated successfully.
Nov 25 18:34:58 np0005535838 podman[105540]: 2025-11-25 23:34:58.559824393 +0000 UTC m=+0.158259051 container remove 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:34:58 np0005535838 systemd[1]: libpod-conmon-8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd.scope: Deactivated successfully.
Nov 25 18:34:58 np0005535838 podman[105581]: 2025-11-25 23:34:58.75751314 +0000 UTC m=+0.060121982 container create ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:34:58 np0005535838 systemd[1]: Started libpod-conmon-ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f.scope.
Nov 25 18:34:58 np0005535838 podman[105581]: 2025-11-25 23:34:58.732275053 +0000 UTC m=+0.034883865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:34:58 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:34:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:34:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:34:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:34:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:34:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:34:58 np0005535838 podman[105581]: 2025-11-25 23:34:58.877842603 +0000 UTC m=+0.180451475 container init ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:34:58 np0005535838 podman[105581]: 2025-11-25 23:34:58.886278589 +0000 UTC m=+0.188887391 container start ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:34:58 np0005535838 podman[105581]: 2025-11-25 23:34:58.889688651 +0000 UTC m=+0.192297493 container attach ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:34:59 np0005535838 quirky_matsumoto[105597]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:34:59 np0005535838 quirky_matsumoto[105597]: --> relative data size: 1.0
Nov 25 18:34:59 np0005535838 quirky_matsumoto[105597]: --> All data devices are unavailable
Nov 25 18:34:59 np0005535838 systemd[1]: libpod-ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f.scope: Deactivated successfully.
Nov 25 18:34:59 np0005535838 podman[105581]: 2025-11-25 23:34:59.895427126 +0000 UTC m=+1.198035958 container died ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 25 18:34:59 np0005535838 systemd[1]: var-lib-containers-storage-overlay-6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a-merged.mount: Deactivated successfully.
Nov 25 18:34:59 np0005535838 podman[105581]: 2025-11-25 23:34:59.958629119 +0000 UTC m=+1.261237931 container remove ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:34:59 np0005535838 systemd[1]: libpod-conmon-ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f.scope: Deactivated successfully.
Nov 25 18:35:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:00 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 25 18:35:00 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 25 18:35:00 np0005535838 podman[105775]: 2025-11-25 23:35:00.612162679 +0000 UTC m=+0.048643265 container create aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:35:00 np0005535838 systemd[1]: Started libpod-conmon-aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc.scope.
Nov 25 18:35:00 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:35:00 np0005535838 podman[105775]: 2025-11-25 23:35:00.591724061 +0000 UTC m=+0.028204747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:35:00 np0005535838 podman[105775]: 2025-11-25 23:35:00.697567497 +0000 UTC m=+0.134048163 container init aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 18:35:00 np0005535838 podman[105775]: 2025-11-25 23:35:00.704930404 +0000 UTC m=+0.141410990 container start aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:35:00 np0005535838 podman[105775]: 2025-11-25 23:35:00.708543501 +0000 UTC m=+0.145024087 container attach aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 18:35:00 np0005535838 nifty_cartwright[105792]: 167 167
Nov 25 18:35:00 np0005535838 systemd[1]: libpod-aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc.scope: Deactivated successfully.
Nov 25 18:35:00 np0005535838 podman[105775]: 2025-11-25 23:35:00.71188753 +0000 UTC m=+0.148368126 container died aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:35:00 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a1ce747fbd83d0beb939caa1c03324c8a642f5080d14399fa8aae36d9c2e5f79-merged.mount: Deactivated successfully.
Nov 25 18:35:00 np0005535838 podman[105775]: 2025-11-25 23:35:00.750150206 +0000 UTC m=+0.186630792 container remove aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:35:00 np0005535838 systemd[1]: libpod-conmon-aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc.scope: Deactivated successfully.
Nov 25 18:35:00 np0005535838 podman[105817]: 2025-11-25 23:35:00.926376686 +0000 UTC m=+0.064353035 container create 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:35:00 np0005535838 systemd[1]: Started libpod-conmon-2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637.scope.
Nov 25 18:35:00 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:35:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:35:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:35:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:35:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:35:00 np0005535838 podman[105817]: 2025-11-25 23:35:00.902395304 +0000 UTC m=+0.040371643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:35:01 np0005535838 podman[105817]: 2025-11-25 23:35:01.01270471 +0000 UTC m=+0.150681079 container init 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:35:01 np0005535838 podman[105817]: 2025-11-25 23:35:01.021273519 +0000 UTC m=+0.159249848 container start 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:35:01 np0005535838 podman[105817]: 2025-11-25 23:35:01.024777533 +0000 UTC m=+0.162753892 container attach 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:35:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:35:01 np0005535838 priceless_cori[105837]: {
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:    "0": [
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:        {
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "devices": [
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "/dev/loop3"
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            ],
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_name": "ceph_lv0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_size": "21470642176",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "name": "ceph_lv0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "tags": {
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.cluster_name": "ceph",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.crush_device_class": "",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.encrypted": "0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.osd_id": "0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.type": "block",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.vdo": "0"
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            },
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "type": "block",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "vg_name": "ceph_vg0"
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:        }
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:    ],
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:    "1": [
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:        {
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "devices": [
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "/dev/loop4"
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            ],
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_name": "ceph_lv1",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_size": "21470642176",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "name": "ceph_lv1",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "tags": {
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.cluster_name": "ceph",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.crush_device_class": "",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.encrypted": "0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.osd_id": "1",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.type": "block",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.vdo": "0"
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            },
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "type": "block",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "vg_name": "ceph_vg1"
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:        }
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:    ],
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:    "2": [
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:        {
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "devices": [
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "/dev/loop5"
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            ],
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_name": "ceph_lv2",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_size": "21470642176",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "name": "ceph_lv2",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "tags": {
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.cluster_name": "ceph",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.crush_device_class": "",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.encrypted": "0",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.osd_id": "2",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.type": "block",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:                "ceph.vdo": "0"
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            },
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "type": "block",
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:            "vg_name": "ceph_vg2"
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:        }
Nov 25 18:35:01 np0005535838 priceless_cori[105837]:    ]
Nov 25 18:35:01 np0005535838 priceless_cori[105837]: }
Nov 25 18:35:01 np0005535838 systemd[1]: libpod-2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637.scope: Deactivated successfully.
Nov 25 18:35:01 np0005535838 conmon[105837]: conmon 2580ff58d3d606d6d72c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637.scope/container/memory.events
Nov 25 18:35:01 np0005535838 podman[105817]: 2025-11-25 23:35:01.785224517 +0000 UTC m=+0.923200856 container died 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 18:35:01 np0005535838 systemd[1]: var-lib-containers-storage-overlay-14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa-merged.mount: Deactivated successfully.
Nov 25 18:35:01 np0005535838 podman[105817]: 2025-11-25 23:35:01.844271368 +0000 UTC m=+0.982247677 container remove 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:35:01 np0005535838 systemd[1]: libpod-conmon-2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637.scope: Deactivated successfully.
Nov 25 18:35:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:02 np0005535838 podman[106001]: 2025-11-25 23:35:02.612438449 +0000 UTC m=+0.056631139 container create b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:35:02 np0005535838 systemd[1]: Started libpod-conmon-b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf.scope.
Nov 25 18:35:02 np0005535838 podman[106001]: 2025-11-25 23:35:02.585783705 +0000 UTC m=+0.029976445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:35:02 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:35:02 np0005535838 podman[106001]: 2025-11-25 23:35:02.72262306 +0000 UTC m=+0.166815810 container init b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:35:02 np0005535838 podman[106001]: 2025-11-25 23:35:02.733091921 +0000 UTC m=+0.177284601 container start b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 18:35:02 np0005535838 podman[106001]: 2025-11-25 23:35:02.737316075 +0000 UTC m=+0.181508765 container attach b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 18:35:02 np0005535838 keen_gagarin[106018]: 167 167
Nov 25 18:35:02 np0005535838 systemd[1]: libpod-b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf.scope: Deactivated successfully.
Nov 25 18:35:02 np0005535838 podman[106001]: 2025-11-25 23:35:02.743771907 +0000 UTC m=+0.187964597 container died b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:35:02 np0005535838 systemd[1]: var-lib-containers-storage-overlay-4c28586afa73800240d7230627d9c214f9e6f9c5cad4f2d9439ff75d93eee594-merged.mount: Deactivated successfully.
Nov 25 18:35:02 np0005535838 podman[106001]: 2025-11-25 23:35:02.796291584 +0000 UTC m=+0.240484284 container remove b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:35:02 np0005535838 systemd[1]: libpod-conmon-b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf.scope: Deactivated successfully.
Nov 25 18:35:03 np0005535838 podman[106043]: 2025-11-25 23:35:03.042648155 +0000 UTC m=+0.075916265 container create 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 18:35:03 np0005535838 systemd[1]: Started libpod-conmon-55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2.scope.
Nov 25 18:35:03 np0005535838 podman[106043]: 2025-11-25 23:35:03.012904718 +0000 UTC m=+0.046172888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:35:03 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:35:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:35:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:35:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:35:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:35:03 np0005535838 podman[106043]: 2025-11-25 23:35:03.187855725 +0000 UTC m=+0.221123905 container init 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 18:35:03 np0005535838 podman[106043]: 2025-11-25 23:35:03.201427828 +0000 UTC m=+0.234695948 container start 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:35:03 np0005535838 podman[106043]: 2025-11-25 23:35:03.205566889 +0000 UTC m=+0.238834999 container attach 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:35:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:04 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 25 18:35:04 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]: {
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "osd_id": 2,
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "type": "bluestore"
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:    },
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "osd_id": 1,
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "type": "bluestore"
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:    },
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "osd_id": 0,
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:        "type": "bluestore"
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]:    }
Nov 25 18:35:04 np0005535838 naughty_thompson[106059]: }
Nov 25 18:35:04 np0005535838 systemd[1]: libpod-55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2.scope: Deactivated successfully.
Nov 25 18:35:04 np0005535838 podman[106043]: 2025-11-25 23:35:04.255087508 +0000 UTC m=+1.288355598 container died 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 18:35:04 np0005535838 systemd[1]: libpod-55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2.scope: Consumed 1.064s CPU time.
Nov 25 18:35:04 np0005535838 systemd[1]: var-lib-containers-storage-overlay-10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca-merged.mount: Deactivated successfully.
Nov 25 18:35:04 np0005535838 podman[106043]: 2025-11-25 23:35:04.313791081 +0000 UTC m=+1.347059171 container remove 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:35:04 np0005535838 systemd[1]: libpod-conmon-55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2.scope: Deactivated successfully.
Nov 25 18:35:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:35:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:35:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:35:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:35:04 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev ec9b7432-ea76-4cdd-a465-d453686928b7 does not exist
Nov 25 18:35:05 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 25 18:35:05 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 25 18:35:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:05 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:35:05 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:35:05 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 25 18:35:05 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 25 18:35:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:06 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 25 18:35:06 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 25 18:35:07 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 25 18:35:07 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 25 18:35:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:08 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 18:35:08 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 18:35:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:10 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 18:35:10 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 18:35:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:12 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 25 18:35:12 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 25 18:35:12 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 25 18:35:12 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 25 18:35:13 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 25 18:35:13 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 25 18:35:14 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 25 18:35:14 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 25 18:35:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:15 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 25 18:35:15 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 25 18:35:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:16 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Nov 25 18:35:16 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Nov 25 18:35:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:18 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 18:35:18 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 18:35:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:20 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 25 18:35:20 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 25 18:35:21 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 25 18:35:21 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 25 18:35:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:24 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 25 18:35:24 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 25 18:35:24 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 25 18:35:24 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 25 18:35:25 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 25 18:35:25 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 25 18:35:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:35:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:35:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:35:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:35:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:35:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:35:26 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 25 18:35:26 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 25 18:35:26 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 25 18:35:26 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 25 18:35:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:28 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 25 18:35:28 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 25 18:35:29 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 25 18:35:29 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 25 18:35:29 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 25 18:35:29 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 25 18:35:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:30 np0005535838 python3.9[106380]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:35:30 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 25 18:35:30 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 25 18:35:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 25 18:35:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 25 18:35:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:32 np0005535838 python3.9[106667]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 18:35:32 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 18:35:32 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 18:35:33 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 25 18:35:33 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 25 18:35:33 np0005535838 python3.9[106819]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 18:35:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:34 np0005535838 python3.9[106971]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:35:35 np0005535838 python3.9[107123]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 18:35:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:35 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Nov 25 18:35:35 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Nov 25 18:35:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:36 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 25 18:35:36 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 25 18:35:36 np0005535838 python3.9[107275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:35:36 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 25 18:35:36 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 25 18:35:37 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 25 18:35:37 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 25 18:35:37 np0005535838 python3.9[107427]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:35:38 np0005535838 python3.9[107505]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:35:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:38 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 25 18:35:38 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 25 18:35:39 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 25 18:35:39 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 25 18:35:39 np0005535838 python3.9[107657]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:35:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 18:35:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 18:35:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:40 np0005535838 python3.9[107811]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 18:35:41 np0005535838 python3.9[107964]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 18:35:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:42 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 18:35:42 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 18:35:42 np0005535838 python3.9[108117]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 18:35:43 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 25 18:35:43 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 25 18:35:43 np0005535838 python3.9[108269]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 18:35:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:44 np0005535838 python3.9[108421]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:35:44 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 18:35:44 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 18:35:44 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 25 18:35:44 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 25 18:35:45 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 18:35:45 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 18:35:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:46 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 25 18:35:46 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 25 18:35:46 np0005535838 python3.9[108576]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:35:47 np0005535838 python3.9[108728]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:35:47 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 25 18:35:47 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 25 18:35:47 np0005535838 python3.9[108808]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:35:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:48 np0005535838 python3.9[108960]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:35:49 np0005535838 python3.9[109038]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:35:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:50 np0005535838 python3.9[109190]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:35:51 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 25 18:35:51 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 25 18:35:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:52 np0005535838 python3.9[109341]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:35:53 np0005535838 python3.9[109495]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 18:35:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:54 np0005535838 python3.9[109645]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:35:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:35:55 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 25 18:35:55 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 25 18:35:55 np0005535838 python3.9[109797]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:35:55 np0005535838 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 18:35:55 np0005535838 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 18:35:55 np0005535838 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 18:35:55 np0005535838 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:35:56
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'images']
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:35:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:35:56 np0005535838 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 18:35:56 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 25 18:35:56 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 25 18:35:56 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 25 18:35:56 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 25 18:35:57 np0005535838 python3.9[109959]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 18:35:57 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 18:35:57 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 18:35:57 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 25 18:35:57 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 25 18:35:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:35:58 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 18:35:58 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 18:35:59 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 25 18:35:59 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 25 18:35:59 np0005535838 python3.9[110111]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:36:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:00 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 25 18:36:00 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 25 18:36:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:00 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 25 18:36:00 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 25 18:36:00 np0005535838 python3.9[110265]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:36:01 np0005535838 systemd[1]: session-35.scope: Deactivated successfully.
Nov 25 18:36:01 np0005535838 systemd[1]: session-35.scope: Consumed 1min 6.656s CPU time.
Nov 25 18:36:01 np0005535838 systemd-logind[789]: Session 35 logged out. Waiting for processes to exit.
Nov 25 18:36:01 np0005535838 systemd-logind[789]: Removed session 35.
Nov 25 18:36:01 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 25 18:36:01 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:36:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:36:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:02 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 25 18:36:02 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 25 18:36:03 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 25 18:36:03 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 25 18:36:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:04 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Nov 25 18:36:04 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Nov 25 18:36:04 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 25 18:36:04 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 25 18:36:05 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 25 18:36:05 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:36:05 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 9f554c3b-47e2-4a5f-b767-c65c437e5e4b does not exist
Nov 25 18:36:05 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 47e23d0d-6715-4914-ab8e-93815f9b9ee1 does not exist
Nov 25 18:36:05 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 11febb9f-6c7e-4ca9-bfca-045a47236623 does not exist
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:36:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:36:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:06 np0005535838 podman[110563]: 2025-11-25 23:36:06.12989105 +0000 UTC m=+0.074392931 container create bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 25 18:36:06 np0005535838 systemd[1]: Started libpod-conmon-bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb.scope.
Nov 25 18:36:06 np0005535838 podman[110563]: 2025-11-25 23:36:06.099081907 +0000 UTC m=+0.043583838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:36:06 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:36:06 np0005535838 podman[110563]: 2025-11-25 23:36:06.239401798 +0000 UTC m=+0.183903709 container init bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:36:06 np0005535838 podman[110563]: 2025-11-25 23:36:06.252414035 +0000 UTC m=+0.196915917 container start bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:36:06 np0005535838 podman[110563]: 2025-11-25 23:36:06.257027229 +0000 UTC m=+0.201529160 container attach bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:36:06 np0005535838 exciting_shirley[110582]: 167 167
Nov 25 18:36:06 np0005535838 systemd[1]: libpod-bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb.scope: Deactivated successfully.
Nov 25 18:36:06 np0005535838 podman[110563]: 2025-11-25 23:36:06.262785072 +0000 UTC m=+0.207286993 container died bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:36:06 np0005535838 systemd-logind[789]: New session 36 of user zuul.
Nov 25 18:36:06 np0005535838 systemd[1]: Started Session 36 of User zuul.
Nov 25 18:36:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay-81640ac529e1c138cda3fcd1ab721c959a39bbb0aeb001e5b16ada2b81414c9f-merged.mount: Deactivated successfully.
Nov 25 18:36:06 np0005535838 podman[110563]: 2025-11-25 23:36:06.324336098 +0000 UTC m=+0.268837979 container remove bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:36:06 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:36:06 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:36:06 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:36:06 np0005535838 systemd[1]: libpod-conmon-bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb.scope: Deactivated successfully.
Nov 25 18:36:06 np0005535838 podman[110647]: 2025-11-25 23:36:06.548633906 +0000 UTC m=+0.066696965 container create 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:36:06 np0005535838 systemd[1]: Started libpod-conmon-299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc.scope.
Nov 25 18:36:06 np0005535838 podman[110647]: 2025-11-25 23:36:06.521613913 +0000 UTC m=+0.039677062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:36:06 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:36:06 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:06 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:06 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:06 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:06 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:06 np0005535838 podman[110647]: 2025-11-25 23:36:06.666009643 +0000 UTC m=+0.184072772 container init 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:36:06 np0005535838 podman[110647]: 2025-11-25 23:36:06.681774955 +0000 UTC m=+0.199838014 container start 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:36:06 np0005535838 podman[110647]: 2025-11-25 23:36:06.685418832 +0000 UTC m=+0.203481911 container attach 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:36:07 np0005535838 python3.9[110777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:36:07 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 18:36:07 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 18:36:07 np0005535838 elegant_bose[110675]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:36:07 np0005535838 elegant_bose[110675]: --> relative data size: 1.0
Nov 25 18:36:07 np0005535838 elegant_bose[110675]: --> All data devices are unavailable
Nov 25 18:36:07 np0005535838 systemd[1]: libpod-299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc.scope: Deactivated successfully.
Nov 25 18:36:07 np0005535838 systemd[1]: libpod-299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc.scope: Consumed 1.035s CPU time.
Nov 25 18:36:07 np0005535838 podman[110647]: 2025-11-25 23:36:07.760942987 +0000 UTC m=+1.279006116 container died 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:36:07 np0005535838 systemd[1]: var-lib-containers-storage-overlay-951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee-merged.mount: Deactivated successfully.
Nov 25 18:36:07 np0005535838 podman[110647]: 2025-11-25 23:36:07.825657128 +0000 UTC m=+1.343720177 container remove 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:36:07 np0005535838 systemd[1]: libpod-conmon-299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc.scope: Deactivated successfully.
Nov 25 18:36:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:08 np0005535838 podman[111074]: 2025-11-25 23:36:08.558747377 +0000 UTC m=+0.062829311 container create 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 18:36:08 np0005535838 systemd[1]: Started libpod-conmon-030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf.scope.
Nov 25 18:36:08 np0005535838 podman[111074]: 2025-11-25 23:36:08.530515342 +0000 UTC m=+0.034597256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:36:08 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:36:08 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 25 18:36:08 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 25 18:36:08 np0005535838 podman[111074]: 2025-11-25 23:36:08.684952761 +0000 UTC m=+0.189034655 container init 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:36:08 np0005535838 podman[111074]: 2025-11-25 23:36:08.70175679 +0000 UTC m=+0.205838684 container start 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:36:08 np0005535838 podman[111074]: 2025-11-25 23:36:08.704601986 +0000 UTC m=+0.208683880 container attach 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 18:36:08 np0005535838 nervous_brown[111125]: 167 167
Nov 25 18:36:08 np0005535838 systemd[1]: libpod-030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf.scope: Deactivated successfully.
Nov 25 18:36:08 np0005535838 podman[111074]: 2025-11-25 23:36:08.711371147 +0000 UTC m=+0.215453041 container died 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:36:08 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b8978721606b9f609951ca55866adff388c1c246a7105b015b7a2c2de398c5c0-merged.mount: Deactivated successfully.
Nov 25 18:36:08 np0005535838 podman[111074]: 2025-11-25 23:36:08.756243517 +0000 UTC m=+0.260325411 container remove 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:36:08 np0005535838 systemd[1]: libpod-conmon-030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf.scope: Deactivated successfully.
Nov 25 18:36:08 np0005535838 python3.9[111127]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 18:36:08 np0005535838 podman[111151]: 2025-11-25 23:36:08.937686939 +0000 UTC m=+0.047411659 container create 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:36:08 np0005535838 systemd[1]: Started libpod-conmon-2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe.scope.
Nov 25 18:36:09 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:36:09 np0005535838 podman[111151]: 2025-11-25 23:36:08.919848771 +0000 UTC m=+0.029573511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:36:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:09 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:09 np0005535838 podman[111151]: 2025-11-25 23:36:09.035054151 +0000 UTC m=+0.144778921 container init 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:36:09 np0005535838 podman[111151]: 2025-11-25 23:36:09.047323989 +0000 UTC m=+0.157048719 container start 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:36:09 np0005535838 podman[111151]: 2025-11-25 23:36:09.051426229 +0000 UTC m=+0.161151029 container attach 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 18:36:09 np0005535838 brave_swanson[111192]: {
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:    "0": [
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:        {
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "devices": [
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "/dev/loop3"
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            ],
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_name": "ceph_lv0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_size": "21470642176",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "name": "ceph_lv0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "tags": {
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.cluster_name": "ceph",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.crush_device_class": "",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.encrypted": "0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.osd_id": "0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.type": "block",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.vdo": "0"
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            },
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "type": "block",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "vg_name": "ceph_vg0"
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:        }
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:    ],
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:    "1": [
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:        {
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "devices": [
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "/dev/loop4"
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            ],
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_name": "ceph_lv1",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_size": "21470642176",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "name": "ceph_lv1",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "tags": {
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.cluster_name": "ceph",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.crush_device_class": "",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.encrypted": "0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.osd_id": "1",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.type": "block",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.vdo": "0"
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            },
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "type": "block",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "vg_name": "ceph_vg1"
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:        }
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:    ],
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:    "2": [
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:        {
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "devices": [
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "/dev/loop5"
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            ],
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_name": "ceph_lv2",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_size": "21470642176",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "name": "ceph_lv2",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "tags": {
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.cluster_name": "ceph",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.crush_device_class": "",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.encrypted": "0",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.osd_id": "2",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.type": "block",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:                "ceph.vdo": "0"
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            },
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "type": "block",
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:            "vg_name": "ceph_vg2"
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:        }
Nov 25 18:36:09 np0005535838 brave_swanson[111192]:    ]
Nov 25 18:36:09 np0005535838 brave_swanson[111192]: }
Nov 25 18:36:09 np0005535838 systemd[1]: libpod-2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe.scope: Deactivated successfully.
Nov 25 18:36:09 np0005535838 podman[111151]: 2025-11-25 23:36:09.851094059 +0000 UTC m=+0.960818819 container died 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:36:09 np0005535838 systemd[1]: var-lib-containers-storage-overlay-32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84-merged.mount: Deactivated successfully.
Nov 25 18:36:09 np0005535838 podman[111151]: 2025-11-25 23:36:09.91845659 +0000 UTC m=+1.028181320 container remove 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:36:09 np0005535838 systemd[1]: libpod-conmon-2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe.scope: Deactivated successfully.
Nov 25 18:36:10 np0005535838 python3.9[111326]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:36:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:10 np0005535838 podman[111514]: 2025-11-25 23:36:10.620833029 +0000 UTC m=+0.039363754 container create 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:36:10 np0005535838 systemd[1]: Started libpod-conmon-956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa.scope.
Nov 25 18:36:10 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:36:10 np0005535838 podman[111514]: 2025-11-25 23:36:10.606105526 +0000 UTC m=+0.024636281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:36:10 np0005535838 podman[111514]: 2025-11-25 23:36:10.703316424 +0000 UTC m=+0.121847179 container init 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:36:10 np0005535838 podman[111514]: 2025-11-25 23:36:10.713056615 +0000 UTC m=+0.131587350 container start 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:36:10 np0005535838 podman[111514]: 2025-11-25 23:36:10.716659371 +0000 UTC m=+0.135190126 container attach 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:36:10 np0005535838 flamboyant_bhabha[111554]: 167 167
Nov 25 18:36:10 np0005535838 systemd[1]: libpod-956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa.scope: Deactivated successfully.
Nov 25 18:36:10 np0005535838 podman[111514]: 2025-11-25 23:36:10.719963169 +0000 UTC m=+0.138493904 container died 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:36:10 np0005535838 systemd[1]: var-lib-containers-storage-overlay-fa7984a19737275d72c7e17b346e6df7d5a34dfb1354fdd9924fb5bc084b83a8-merged.mount: Deactivated successfully.
Nov 25 18:36:10 np0005535838 podman[111514]: 2025-11-25 23:36:10.7697443 +0000 UTC m=+0.188275055 container remove 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:36:10 np0005535838 systemd[1]: libpod-conmon-956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa.scope: Deactivated successfully.
Nov 25 18:36:10 np0005535838 python3.9[111594]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 18:36:11 np0005535838 podman[111605]: 2025-11-25 23:36:11.015043219 +0000 UTC m=+0.068697468 container create 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:36:11 np0005535838 systemd[1]: Started libpod-conmon-7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd.scope.
Nov 25 18:36:11 np0005535838 podman[111605]: 2025-11-25 23:36:10.986774072 +0000 UTC m=+0.040428321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:36:11 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:36:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:36:11 np0005535838 podman[111605]: 2025-11-25 23:36:11.138103399 +0000 UTC m=+0.191757698 container init 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:36:11 np0005535838 podman[111605]: 2025-11-25 23:36:11.150058578 +0000 UTC m=+0.203712817 container start 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 18:36:11 np0005535838 podman[111605]: 2025-11-25 23:36:11.154336472 +0000 UTC m=+0.207990781 container attach 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:36:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]: {
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "osd_id": 2,
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "type": "bluestore"
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:    },
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "osd_id": 1,
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "type": "bluestore"
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:    },
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "osd_id": 0,
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:        "type": "bluestore"
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]:    }
Nov 25 18:36:12 np0005535838 stoic_haibt[111623]: }
Nov 25 18:36:12 np0005535838 systemd[1]: libpod-7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd.scope: Deactivated successfully.
Nov 25 18:36:12 np0005535838 systemd[1]: libpod-7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd.scope: Consumed 1.038s CPU time.
Nov 25 18:36:12 np0005535838 podman[111605]: 2025-11-25 23:36:12.181162445 +0000 UTC m=+1.234816694 container died 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 18:36:12 np0005535838 systemd[1]: var-lib-containers-storage-overlay-af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f-merged.mount: Deactivated successfully.
Nov 25 18:36:12 np0005535838 podman[111605]: 2025-11-25 23:36:12.257800684 +0000 UTC m=+1.311454933 container remove 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:36:12 np0005535838 systemd[1]: libpod-conmon-7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd.scope: Deactivated successfully.
Nov 25 18:36:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:36:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:36:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:36:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:36:12 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 3492ecdb-c12f-404e-b450-adda31964ea3 does not exist
Nov 25 18:36:13 np0005535838 python3.9[111869]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:36:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:36:13 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:36:13 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 25 18:36:13 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 25 18:36:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:14 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Nov 25 18:36:14 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Nov 25 18:36:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:15 np0005535838 python3.9[112022]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 18:36:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:16 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 25 18:36:16 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 25 18:36:16 np0005535838 python3.9[112177]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:36:17 np0005535838 python3.9[112329]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 18:36:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:18 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 25 18:36:18 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 25 18:36:18 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 25 18:36:18 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 25 18:36:18 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 25 18:36:18 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 25 18:36:18 np0005535838 python3.9[112479]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:36:19 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 25 18:36:19 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 25 18:36:20 np0005535838 python3.9[112637]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:36:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:20 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 25 18:36:20 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 25 18:36:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:20 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 25 18:36:20 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 25 18:36:21 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Nov 25 18:36:21 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 25 18:36:21 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Nov 25 18:36:21 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 25 18:36:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:22 np0005535838 python3.9[112791]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:36:22 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 25 18:36:22 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 25 18:36:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:24 np0005535838 python3.9[113078]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 18:36:25 np0005535838 python3.9[113228]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:36:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:25 np0005535838 python3.9[113382]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:36:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:36:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:36:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:36:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:36:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:36:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:36:27 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 25 18:36:27 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 25 18:36:27 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 25 18:36:27 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 25 18:36:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:28 np0005535838 python3.9[113535]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:36:29 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 18:36:29 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 18:36:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:30 np0005535838 python3.9[113688]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:36:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 25 18:36:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 25 18:36:31 np0005535838 python3.9[113842]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 25 18:36:31 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 25 18:36:31 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 25 18:36:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:32 np0005535838 systemd-logind[789]: Session 36 logged out. Waiting for processes to exit.
Nov 25 18:36:32 np0005535838 systemd[1]: session-36.scope: Deactivated successfully.
Nov 25 18:36:32 np0005535838 systemd[1]: session-36.scope: Consumed 19.476s CPU time.
Nov 25 18:36:32 np0005535838 systemd-logind[789]: Removed session 36.
Nov 25 18:36:32 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 25 18:36:32 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 25 18:36:33 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 25 18:36:33 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 25 18:36:33 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 25 18:36:33 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 25 18:36:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:36 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Nov 25 18:36:36 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Nov 25 18:36:37 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 25 18:36:37 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 25 18:36:37 np0005535838 systemd-logind[789]: New session 37 of user zuul.
Nov 25 18:36:37 np0005535838 systemd[1]: Started Session 37 of User zuul.
Nov 25 18:36:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:38 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 18:36:38 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 18:36:39 np0005535838 python3.9[114020]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:36:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 25 18:36:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 25 18:36:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:40 np0005535838 python3.9[114174]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:36:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:41 np0005535838 python3.9[114367]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:36:41 np0005535838 systemd-logind[789]: Session 37 logged out. Waiting for processes to exit.
Nov 25 18:36:41 np0005535838 systemd[1]: session-37.scope: Deactivated successfully.
Nov 25 18:36:41 np0005535838 systemd[1]: session-37.scope: Consumed 2.858s CPU time.
Nov 25 18:36:41 np0005535838 systemd-logind[789]: Removed session 37.
Nov 25 18:36:41 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 25 18:36:41 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 25 18:36:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:42 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 25 18:36:42 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 25 18:36:43 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.7 deep-scrub starts
Nov 25 18:36:43 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.7 deep-scrub ok
Nov 25 18:36:43 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 25 18:36:43 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 25 18:36:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:45 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 25 18:36:45 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 25 18:36:45 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 25 18:36:45 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 25 18:36:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:47 np0005535838 systemd-logind[789]: New session 38 of user zuul.
Nov 25 18:36:47 np0005535838 systemd[1]: Started Session 38 of User zuul.
Nov 25 18:36:47 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Nov 25 18:36:47 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Nov 25 18:36:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:48 np0005535838 python3.9[114546]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:36:49 np0005535838 python3.9[114700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:36:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:50 np0005535838 python3.9[114856]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:36:51 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 25 18:36:51 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 25 18:36:51 np0005535838 python3.9[114940]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:36:51 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 18:36:51 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 18:36:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:53 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 25 18:36:53 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 25 18:36:53 np0005535838 python3.9[115093]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:36:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:54 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 25 18:36:54 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 25 18:36:54 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 25 18:36:54 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 25 18:36:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:36:55 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 25 18:36:55 np0005535838 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 25 18:36:55 np0005535838 python3.9[115288]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:36:56
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'images']
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:36:56 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 25 18:36:56 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:36:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:56 np0005535838 python3.9[115440]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:36:57 np0005535838 python3.9[115606]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:36:57 np0005535838 python3.9[115684]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:36:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:36:58 np0005535838 python3.9[115836]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:36:59 np0005535838 python3.9[115914]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:37:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:00 np0005535838 python3.9[116066]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:37:01 np0005535838 python3.9[116218]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:37:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:37:01 np0005535838 python3.9[116370]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:37:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:02 np0005535838 python3.9[116522]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:37:03 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 25 18:37:03 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 25 18:37:03 np0005535838 python3.9[116674]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:37:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:06 np0005535838 python3.9[116829]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:37:06 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 18:37:06 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 25 18:37:06 np0005535838 python3.9[116983]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:37:07 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Nov 25 18:37:07 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Nov 25 18:37:07 np0005535838 python3.9[117135]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:37:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:08 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 25 18:37:08 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 25 18:37:08 np0005535838 python3.9[117287]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:37:09 np0005535838 python3.9[117440]: ansible-service_facts Invoked
Nov 25 18:37:09 np0005535838 network[117457]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 18:37:09 np0005535838 network[117458]: 'network-scripts' will be removed from distribution in near future.
Nov 25 18:37:09 np0005535838 network[117459]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 18:37:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:10 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 18:37:10 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 18:37:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:37:13 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 220bf64d-8508-4af9-b585-0deccf424ab5 does not exist
Nov 25 18:37:13 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 6c6add32-be32-4aa1-b6a0-04fbe6fbf5b5 does not exist
Nov 25 18:37:13 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 272ebab4-8c82-4f61-9e92-d5d8d23e77fb does not exist
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:37:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:37:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:14 np0005535838 podman[118109]: 2025-11-25 23:37:14.145906122 +0000 UTC m=+0.072770125 container create 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:37:14 np0005535838 systemd[77281]: Created slice User Background Tasks Slice.
Nov 25 18:37:14 np0005535838 systemd[77281]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 18:37:14 np0005535838 systemd[1]: Started libpod-conmon-7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b.scope.
Nov 25 18:37:14 np0005535838 systemd[77281]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 18:37:14 np0005535838 podman[118109]: 2025-11-25 23:37:14.108914008 +0000 UTC m=+0.035778021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:37:14 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:37:14 np0005535838 podman[118109]: 2025-11-25 23:37:14.253628294 +0000 UTC m=+0.180492297 container init 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:37:14 np0005535838 podman[118109]: 2025-11-25 23:37:14.262216888 +0000 UTC m=+0.189080901 container start 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:37:14 np0005535838 podman[118109]: 2025-11-25 23:37:14.265881967 +0000 UTC m=+0.192745950 container attach 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 18:37:14 np0005535838 magical_mayer[118157]: 167 167
Nov 25 18:37:14 np0005535838 systemd[1]: libpod-7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b.scope: Deactivated successfully.
Nov 25 18:37:14 np0005535838 podman[118109]: 2025-11-25 23:37:14.27339699 +0000 UTC m=+0.200260963 container died 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:37:14 np0005535838 systemd[1]: var-lib-containers-storage-overlay-227bea80e4be7c41e1a7779465f66e1efff767ff5c79b4b0879db2b6cbe72cc7-merged.mount: Deactivated successfully.
Nov 25 18:37:14 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:37:14 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:37:14 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:37:14 np0005535838 podman[118109]: 2025-11-25 23:37:14.324890817 +0000 UTC m=+0.251754780 container remove 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:37:14 np0005535838 systemd[1]: libpod-conmon-7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b.scope: Deactivated successfully.
Nov 25 18:37:14 np0005535838 podman[118225]: 2025-11-25 23:37:14.481240279 +0000 UTC m=+0.044386086 container create a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:37:14 np0005535838 systemd[1]: Started libpod-conmon-a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826.scope.
Nov 25 18:37:14 np0005535838 python3.9[118217]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:37:14 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:37:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:14 np0005535838 podman[118225]: 2025-11-25 23:37:14.462691326 +0000 UTC m=+0.025837173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:37:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:14 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:14 np0005535838 podman[118225]: 2025-11-25 23:37:14.572952846 +0000 UTC m=+0.136098653 container init a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 18:37:14 np0005535838 podman[118225]: 2025-11-25 23:37:14.58968023 +0000 UTC m=+0.152826027 container start a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:37:14 np0005535838 podman[118225]: 2025-11-25 23:37:14.592848277 +0000 UTC m=+0.155994094 container attach a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:37:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:15 np0005535838 zen_liskov[118242]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:37:15 np0005535838 zen_liskov[118242]: --> relative data size: 1.0
Nov 25 18:37:15 np0005535838 zen_liskov[118242]: --> All data devices are unavailable
Nov 25 18:37:15 np0005535838 systemd[1]: libpod-a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826.scope: Deactivated successfully.
Nov 25 18:37:15 np0005535838 podman[118225]: 2025-11-25 23:37:15.583588383 +0000 UTC m=+1.146734210 container died a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 18:37:15 np0005535838 systemd[1]: var-lib-containers-storage-overlay-89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2-merged.mount: Deactivated successfully.
Nov 25 18:37:15 np0005535838 podman[118225]: 2025-11-25 23:37:15.631498562 +0000 UTC m=+1.194644359 container remove a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:37:15 np0005535838 systemd[1]: libpod-conmon-a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826.scope: Deactivated successfully.
Nov 25 18:37:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:16 np0005535838 podman[118499]: 2025-11-25 23:37:16.237800599 +0000 UTC m=+0.048013443 container create 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:37:16 np0005535838 systemd[1]: Started libpod-conmon-30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd.scope.
Nov 25 18:37:16 np0005535838 podman[118499]: 2025-11-25 23:37:16.218410234 +0000 UTC m=+0.028623098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:37:16 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:37:16 np0005535838 podman[118499]: 2025-11-25 23:37:16.331580363 +0000 UTC m=+0.141793287 container init 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:37:16 np0005535838 podman[118499]: 2025-11-25 23:37:16.339961851 +0000 UTC m=+0.150174695 container start 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:37:16 np0005535838 podman[118499]: 2025-11-25 23:37:16.343146668 +0000 UTC m=+0.153359542 container attach 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:37:16 np0005535838 fervent_banzai[118520]: 167 167
Nov 25 18:37:16 np0005535838 systemd[1]: libpod-30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd.scope: Deactivated successfully.
Nov 25 18:37:16 np0005535838 podman[118499]: 2025-11-25 23:37:16.348108172 +0000 UTC m=+0.158321056 container died 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:37:16 np0005535838 systemd[1]: var-lib-containers-storage-overlay-33752ad957791cf824ad45e96ebac8bbc5c1182b9850d5d8eb6b4677bd055405-merged.mount: Deactivated successfully.
Nov 25 18:37:16 np0005535838 podman[118499]: 2025-11-25 23:37:16.402332923 +0000 UTC m=+0.212545807 container remove 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:37:16 np0005535838 systemd[1]: libpod-conmon-30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd.scope: Deactivated successfully.
Nov 25 18:37:16 np0005535838 podman[118546]: 2025-11-25 23:37:16.584368401 +0000 UTC m=+0.040931101 container create bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:37:16 np0005535838 systemd[1]: Started libpod-conmon-bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2.scope.
Nov 25 18:37:16 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:37:16 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:16 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:16 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:16 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:16 np0005535838 podman[118546]: 2025-11-25 23:37:16.567767491 +0000 UTC m=+0.024330201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:37:16 np0005535838 podman[118546]: 2025-11-25 23:37:16.664727521 +0000 UTC m=+0.121290231 container init bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 18:37:16 np0005535838 podman[118546]: 2025-11-25 23:37:16.673412917 +0000 UTC m=+0.129975607 container start bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:37:16 np0005535838 podman[118546]: 2025-11-25 23:37:16.676394798 +0000 UTC m=+0.132957508 container attach bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:37:17 np0005535838 python3.9[118642]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]: {
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:    "0": [
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:        {
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "devices": [
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "/dev/loop3"
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            ],
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_name": "ceph_lv0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_size": "21470642176",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "name": "ceph_lv0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "tags": {
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.cluster_name": "ceph",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.crush_device_class": "",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.encrypted": "0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.osd_id": "0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.type": "block",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.vdo": "0"
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            },
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "type": "block",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "vg_name": "ceph_vg0"
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:        }
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:    ],
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:    "1": [
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:        {
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "devices": [
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "/dev/loop4"
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            ],
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_name": "ceph_lv1",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_size": "21470642176",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "name": "ceph_lv1",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "tags": {
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.cluster_name": "ceph",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.crush_device_class": "",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.encrypted": "0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.osd_id": "1",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.type": "block",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.vdo": "0"
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            },
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "type": "block",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "vg_name": "ceph_vg1"
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:        }
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:    ],
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:    "2": [
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:        {
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "devices": [
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "/dev/loop5"
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            ],
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_name": "ceph_lv2",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_size": "21470642176",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "name": "ceph_lv2",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "tags": {
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.cluster_name": "ceph",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.crush_device_class": "",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.encrypted": "0",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.osd_id": "2",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.type": "block",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:                "ceph.vdo": "0"
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            },
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "type": "block",
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:            "vg_name": "ceph_vg2"
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:        }
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]:    ]
Nov 25 18:37:17 np0005535838 admiring_cannon[118585]: }
Nov 25 18:37:17 np0005535838 systemd[1]: libpod-bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2.scope: Deactivated successfully.
Nov 25 18:37:17 np0005535838 conmon[118585]: conmon bbec863d3f25cd3b53b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2.scope/container/memory.events
Nov 25 18:37:17 np0005535838 podman[118546]: 2025-11-25 23:37:17.42442809 +0000 UTC m=+0.880990820 container died bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:37:17 np0005535838 systemd[1]: var-lib-containers-storage-overlay-22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797-merged.mount: Deactivated successfully.
Nov 25 18:37:17 np0005535838 podman[118546]: 2025-11-25 23:37:17.492861077 +0000 UTC m=+0.949423817 container remove bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:37:17 np0005535838 systemd[1]: libpod-conmon-bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2.scope: Deactivated successfully.
Nov 25 18:37:18 np0005535838 podman[118901]: 2025-11-25 23:37:18.049074765 +0000 UTC m=+0.034773795 container create a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:37:18 np0005535838 systemd[1]: Started libpod-conmon-a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222.scope.
Nov 25 18:37:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:18 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:37:18 np0005535838 podman[118901]: 2025-11-25 23:37:18.033677967 +0000 UTC m=+0.019377007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:37:18 np0005535838 podman[118901]: 2025-11-25 23:37:18.137988788 +0000 UTC m=+0.123687858 container init a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:37:18 np0005535838 podman[118901]: 2025-11-25 23:37:18.144198236 +0000 UTC m=+0.129897266 container start a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:37:18 np0005535838 podman[118901]: 2025-11-25 23:37:18.147495105 +0000 UTC m=+0.133194135 container attach a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:37:18 np0005535838 fervent_khayyam[118941]: 167 167
Nov 25 18:37:18 np0005535838 systemd[1]: libpod-a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222.scope: Deactivated successfully.
Nov 25 18:37:18 np0005535838 podman[118901]: 2025-11-25 23:37:18.150109036 +0000 UTC m=+0.135808106 container died a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 25 18:37:18 np0005535838 systemd[1]: var-lib-containers-storage-overlay-bcfca174ecf074617875eb89a32421a4c61b785eb2e7b1019f224e0fbbd47336-merged.mount: Deactivated successfully.
Nov 25 18:37:18 np0005535838 podman[118901]: 2025-11-25 23:37:18.189267868 +0000 UTC m=+0.174966918 container remove a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:37:18 np0005535838 systemd[1]: libpod-conmon-a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222.scope: Deactivated successfully.
Nov 25 18:37:18 np0005535838 python3.9[118975]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:18 np0005535838 podman[118994]: 2025-11-25 23:37:18.410265093 +0000 UTC m=+0.064736557 container create 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:37:18 np0005535838 systemd[1]: Started libpod-conmon-6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce.scope.
Nov 25 18:37:18 np0005535838 podman[118994]: 2025-11-25 23:37:18.387576928 +0000 UTC m=+0.042048482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:37:18 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:37:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:18 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:37:18 np0005535838 podman[118994]: 2025-11-25 23:37:18.49971238 +0000 UTC m=+0.154183854 container init 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:37:18 np0005535838 podman[118994]: 2025-11-25 23:37:18.508359944 +0000 UTC m=+0.162831418 container start 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:37:18 np0005535838 podman[118994]: 2025-11-25 23:37:18.51189283 +0000 UTC m=+0.166364304 container attach 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:37:18 np0005535838 python3.9[119092]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]: {
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "osd_id": 2,
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "type": "bluestore"
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:    },
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "osd_id": 1,
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "type": "bluestore"
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:    },
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "osd_id": 0,
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:        "type": "bluestore"
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]:    }
Nov 25 18:37:19 np0005535838 dreamy_franklin[119032]: }
Nov 25 18:37:19 np0005535838 systemd[1]: libpod-6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce.scope: Deactivated successfully.
Nov 25 18:37:19 np0005535838 systemd[1]: libpod-6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce.scope: Consumed 1.051s CPU time.
Nov 25 18:37:19 np0005535838 podman[118994]: 2025-11-25 23:37:19.552776677 +0000 UTC m=+1.207248151 container died 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:37:19 np0005535838 systemd[1]: var-lib-containers-storage-overlay-9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189-merged.mount: Deactivated successfully.
Nov 25 18:37:19 np0005535838 podman[118994]: 2025-11-25 23:37:19.604756987 +0000 UTC m=+1.259228451 container remove 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:37:19 np0005535838 systemd[1]: libpod-conmon-6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce.scope: Deactivated successfully.
Nov 25 18:37:19 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:37:19 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:37:19 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:37:19 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:37:19 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev baefa044-b1f4-4dd6-be5e-d835ba20b098 does not exist
Nov 25 18:37:19 np0005535838 python3.9[119308]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:20 np0005535838 python3.9[119412]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:20 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:37:20 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:37:21 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 25 18:37:21 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 25 18:37:21 np0005535838 python3.9[119566]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:23 np0005535838 python3.9[119718]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:37:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:24 np0005535838 python3.9[119802]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:37:25 np0005535838 systemd[1]: session-38.scope: Deactivated successfully.
Nov 25 18:37:25 np0005535838 systemd[1]: session-38.scope: Consumed 26.181s CPU time.
Nov 25 18:37:25 np0005535838 systemd-logind[789]: Session 38 logged out. Waiting for processes to exit.
Nov 25 18:37:25 np0005535838 systemd-logind[789]: Removed session 38.
Nov 25 18:37:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:37:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:37:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:37:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:37:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:37:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:37:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:26 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 25 18:37:26 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 25 18:37:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:30 np0005535838 systemd-logind[789]: New session 39 of user zuul.
Nov 25 18:37:30 np0005535838 systemd[1]: Started Session 39 of User zuul.
Nov 25 18:37:31 np0005535838 python3.9[119985]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:32 np0005535838 python3.9[120137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:33 np0005535838 python3.9[120215]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:33 np0005535838 systemd[1]: session-39.scope: Deactivated successfully.
Nov 25 18:37:33 np0005535838 systemd[1]: session-39.scope: Consumed 1.772s CPU time.
Nov 25 18:37:33 np0005535838 systemd-logind[789]: Session 39 logged out. Waiting for processes to exit.
Nov 25 18:37:33 np0005535838 systemd-logind[789]: Removed session 39.
Nov 25 18:37:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:34 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 18:37:34 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 18:37:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:38 np0005535838 systemd-logind[789]: New session 40 of user zuul.
Nov 25 18:37:38 np0005535838 systemd[1]: Started Session 40 of User zuul.
Nov 25 18:37:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 18:37:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 18:37:40 np0005535838 python3.9[120395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:37:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 18:37:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 18:37:41 np0005535838 python3.9[120551]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:42 np0005535838 python3.9[120726]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:42 np0005535838 python3.9[120804]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.pv7yy4w8 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:44 np0005535838 python3.9[120956]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:44 np0005535838 python3.9[121034]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.872vas6e recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:45 np0005535838 python3.9[121186]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:37:45 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 25 18:37:45 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 25 18:37:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:46 np0005535838 python3.9[121338]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:46 np0005535838 python3.9[121416]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:37:47 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 25 18:37:47 np0005535838 python3.9[121568]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:47 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 25 18:37:48 np0005535838 python3.9[121646]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:37:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:48 np0005535838 python3.9[121798]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:49 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 18:37:49 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 18:37:49 np0005535838 python3.9[121950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:50 np0005535838 python3.9[122028]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:50 np0005535838 python3.9[122180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:51 np0005535838 python3.9[122258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:52 np0005535838 python3.9[122410]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:37:52 np0005535838 systemd[1]: Reloading.
Nov 25 18:37:52 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:37:52 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:37:53 np0005535838 python3.9[122599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:54 np0005535838 python3.9[122677]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:37:55 np0005535838 python3.9[122829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:37:55 np0005535838 python3.9[122907]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:37:56
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'cephfs.cephfs.meta', 'images', 'volumes']
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:37:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:56 np0005535838 python3.9[123059]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:37:56 np0005535838 systemd[1]: Reloading.
Nov 25 18:37:57 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:37:57 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:37:57 np0005535838 systemd[1]: Starting Create netns directory...
Nov 25 18:37:57 np0005535838 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 18:37:57 np0005535838 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 18:37:57 np0005535838 systemd[1]: Finished Create netns directory.
Nov 25 18:37:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:37:58 np0005535838 python3.9[123250]: ansible-ansible.builtin.service_facts Invoked
Nov 25 18:37:58 np0005535838 network[123267]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 18:37:58 np0005535838 network[123268]: 'network-scripts' will be removed from distribution in near future.
Nov 25 18:37:58 np0005535838 network[123269]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 18:38:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:38:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:38:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:03 np0005535838 python3.9[123531]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:04 np0005535838 python3.9[123609]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:04 np0005535838 python3.9[123761]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:05 np0005535838 python3.9[123913]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:06 np0005535838 python3.9[123991]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:07 np0005535838 python3.9[124143]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 18:38:07 np0005535838 systemd[1]: Starting Time & Date Service...
Nov 25 18:38:07 np0005535838 systemd[1]: Started Time & Date Service.
Nov 25 18:38:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:08 np0005535838 python3.9[124299]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:09 np0005535838 python3.9[124451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:10 np0005535838 python3.9[124529]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:10 np0005535838 python3.9[124681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:11 np0005535838 python3.9[124759]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.qp9ja4rk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:12 np0005535838 python3.9[124911]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:12 np0005535838 python3.9[124989]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:13 np0005535838 python3.9[125141]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:38:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:14 np0005535838 python3[125294]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 18:38:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:15 np0005535838 python3.9[125448]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:16 np0005535838 python3.9[125526]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:17 np0005535838 python3.9[125678]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:17 np0005535838 python3.9[125756]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:18 np0005535838 python3.9[125908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:18 np0005535838 python3.9[125986]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:19 np0005535838 python3.9[126138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:20 np0005535838 python3.9[126316]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:38:20 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev c3376adf-74ee-4b59-81a8-36534ac66217 does not exist
Nov 25 18:38:20 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev a4e3ec02-b0c2-4e80-a8a9-620b3ffbd64f does not exist
Nov 25 18:38:20 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev efa512fb-a8bd-47a6-aa72-9b8cb653995b does not exist
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:38:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:38:21 np0005535838 python3.9[126598]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:21 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:38:21 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:38:21 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:38:21 np0005535838 podman[126648]: 2025-11-25 23:38:21.554799147 +0000 UTC m=+0.065571539 container create 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:38:21 np0005535838 systemd[1]: Started libpod-conmon-005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72.scope.
Nov 25 18:38:21 np0005535838 podman[126648]: 2025-11-25 23:38:21.528105521 +0000 UTC m=+0.038877953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:38:21 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:38:21 np0005535838 podman[126648]: 2025-11-25 23:38:21.701005389 +0000 UTC m=+0.211777871 container init 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 18:38:21 np0005535838 podman[126648]: 2025-11-25 23:38:21.708888636 +0000 UTC m=+0.219661058 container start 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:38:21 np0005535838 podman[126648]: 2025-11-25 23:38:21.713155084 +0000 UTC m=+0.223927506 container attach 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:38:21 np0005535838 angry_mendel[126703]: 167 167
Nov 25 18:38:21 np0005535838 systemd[1]: libpod-005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72.scope: Deactivated successfully.
Nov 25 18:38:21 np0005535838 podman[126648]: 2025-11-25 23:38:21.715264952 +0000 UTC m=+0.226037384 container died 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:38:21 np0005535838 systemd[1]: var-lib-containers-storage-overlay-6cc46effeec7752c5c6491add84b12da2bab2a140c634c3566cc4917b755c06c-merged.mount: Deactivated successfully.
Nov 25 18:38:21 np0005535838 podman[126648]: 2025-11-25 23:38:21.771991667 +0000 UTC m=+0.282764089 container remove 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 18:38:21 np0005535838 systemd[1]: libpod-conmon-005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72.scope: Deactivated successfully.
Nov 25 18:38:21 np0005535838 python3.9[126744]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:22 np0005535838 podman[126756]: 2025-11-25 23:38:22.00675994 +0000 UTC m=+0.066123784 container create 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:38:22 np0005535838 systemd[1]: Started libpod-conmon-3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640.scope.
Nov 25 18:38:22 np0005535838 podman[126756]: 2025-11-25 23:38:21.976573688 +0000 UTC m=+0.035937572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:38:22 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:38:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:22 np0005535838 podman[126756]: 2025-11-25 23:38:22.133022273 +0000 UTC m=+0.192386157 container init 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:38:22 np0005535838 podman[126756]: 2025-11-25 23:38:22.139231444 +0000 UTC m=+0.198595288 container start 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:38:22 np0005535838 podman[126756]: 2025-11-25 23:38:22.143006488 +0000 UTC m=+0.202370332 container attach 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:38:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:22 np0005535838 python3.9[126928]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:38:23 np0005535838 exciting_goldwasser[126796]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:38:23 np0005535838 exciting_goldwasser[126796]: --> relative data size: 1.0
Nov 25 18:38:23 np0005535838 exciting_goldwasser[126796]: --> All data devices are unavailable
Nov 25 18:38:23 np0005535838 systemd[1]: libpod-3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640.scope: Deactivated successfully.
Nov 25 18:38:23 np0005535838 podman[126756]: 2025-11-25 23:38:23.17947646 +0000 UTC m=+1.238840274 container died 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:38:23 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0-merged.mount: Deactivated successfully.
Nov 25 18:38:23 np0005535838 podman[126756]: 2025-11-25 23:38:23.262994374 +0000 UTC m=+1.322358188 container remove 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 18:38:23 np0005535838 systemd[1]: libpod-conmon-3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640.scope: Deactivated successfully.
Nov 25 18:38:23 np0005535838 podman[127260]: 2025-11-25 23:38:23.893567592 +0000 UTC m=+0.054314129 container create 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:38:23 np0005535838 python3.9[127240]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:23 np0005535838 systemd[1]: Started libpod-conmon-621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c.scope.
Nov 25 18:38:23 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:38:23 np0005535838 podman[127260]: 2025-11-25 23:38:23.863652508 +0000 UTC m=+0.024399105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:38:23 np0005535838 podman[127260]: 2025-11-25 23:38:23.964488048 +0000 UTC m=+0.125234565 container init 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:38:23 np0005535838 podman[127260]: 2025-11-25 23:38:23.974775902 +0000 UTC m=+0.135522429 container start 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 18:38:23 np0005535838 dazzling_shannon[127277]: 167 167
Nov 25 18:38:23 np0005535838 podman[127260]: 2025-11-25 23:38:23.978908346 +0000 UTC m=+0.139654863 container attach 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 18:38:23 np0005535838 systemd[1]: libpod-621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c.scope: Deactivated successfully.
Nov 25 18:38:23 np0005535838 podman[127260]: 2025-11-25 23:38:23.97979171 +0000 UTC m=+0.140538247 container died 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 18:38:24 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f8ea403d3896ade7ef3894bf4772f140cf6c6431e8899d8fa05f9b0221f6cdef-merged.mount: Deactivated successfully.
Nov 25 18:38:24 np0005535838 podman[127260]: 2025-11-25 23:38:24.045947564 +0000 UTC m=+0.206694101 container remove 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:38:24 np0005535838 systemd[1]: libpod-conmon-621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c.scope: Deactivated successfully.
Nov 25 18:38:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:24 np0005535838 podman[127330]: 2025-11-25 23:38:24.287682281 +0000 UTC m=+0.061542527 container create 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:38:24 np0005535838 podman[127330]: 2025-11-25 23:38:24.257896189 +0000 UTC m=+0.031756485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:38:24 np0005535838 systemd[1]: Started libpod-conmon-128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956.scope.
Nov 25 18:38:24 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:38:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:24 np0005535838 podman[127330]: 2025-11-25 23:38:24.406837617 +0000 UTC m=+0.180697923 container init 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:38:24 np0005535838 podman[127330]: 2025-11-25 23:38:24.419817115 +0000 UTC m=+0.193677341 container start 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:38:24 np0005535838 podman[127330]: 2025-11-25 23:38:24.426504079 +0000 UTC m=+0.200364385 container attach 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:38:24 np0005535838 python3.9[127478]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]: {
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:    "0": [
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:        {
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "devices": [
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "/dev/loop3"
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            ],
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_name": "ceph_lv0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_size": "21470642176",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "name": "ceph_lv0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "tags": {
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.cluster_name": "ceph",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.crush_device_class": "",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.encrypted": "0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.osd_id": "0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.type": "block",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.vdo": "0"
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            },
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "type": "block",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "vg_name": "ceph_vg0"
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:        }
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:    ],
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:    "1": [
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:        {
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "devices": [
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "/dev/loop4"
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            ],
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_name": "ceph_lv1",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_size": "21470642176",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "name": "ceph_lv1",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "tags": {
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.cluster_name": "ceph",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.crush_device_class": "",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.encrypted": "0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.osd_id": "1",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.type": "block",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.vdo": "0"
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            },
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "type": "block",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "vg_name": "ceph_vg1"
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:        }
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:    ],
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:    "2": [
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:        {
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "devices": [
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "/dev/loop5"
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            ],
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_name": "ceph_lv2",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_size": "21470642176",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "name": "ceph_lv2",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "tags": {
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.cluster_name": "ceph",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.crush_device_class": "",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.encrypted": "0",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.osd_id": "2",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.type": "block",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:                "ceph.vdo": "0"
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            },
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "type": "block",
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:            "vg_name": "ceph_vg2"
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:        }
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]:    ]
Nov 25 18:38:25 np0005535838 modest_maxwell[127397]: }
Nov 25 18:38:25 np0005535838 systemd[1]: libpod-128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956.scope: Deactivated successfully.
Nov 25 18:38:25 np0005535838 podman[127330]: 2025-11-25 23:38:25.297961021 +0000 UTC m=+1.071821277 container died 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:38:25 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8-merged.mount: Deactivated successfully.
Nov 25 18:38:25 np0005535838 podman[127330]: 2025-11-25 23:38:25.429642342 +0000 UTC m=+1.203502598 container remove 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 18:38:25 np0005535838 systemd[1]: libpod-conmon-128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956.scope: Deactivated successfully.
Nov 25 18:38:25 np0005535838 python3.9[127649]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:38:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:38:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:38:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:38:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:38:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:38:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:26 np0005535838 podman[127867]: 2025-11-25 23:38:26.271126617 +0000 UTC m=+0.062025991 container create 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 18:38:26 np0005535838 systemd[1]: Started libpod-conmon-27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15.scope.
Nov 25 18:38:26 np0005535838 podman[127867]: 2025-11-25 23:38:26.247437914 +0000 UTC m=+0.038337258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:38:26 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:38:26 np0005535838 podman[127867]: 2025-11-25 23:38:26.373649344 +0000 UTC m=+0.164548728 container init 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 18:38:26 np0005535838 podman[127867]: 2025-11-25 23:38:26.387033004 +0000 UTC m=+0.177932338 container start 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:38:26 np0005535838 podman[127867]: 2025-11-25 23:38:26.390640703 +0000 UTC m=+0.181540037 container attach 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:38:26 np0005535838 systemd[1]: libpod-27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15.scope: Deactivated successfully.
Nov 25 18:38:26 np0005535838 crazy_lovelace[127906]: 167 167
Nov 25 18:38:26 np0005535838 conmon[127906]: conmon 27248072fcef41e0a957 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15.scope/container/memory.events
Nov 25 18:38:26 np0005535838 podman[127867]: 2025-11-25 23:38:26.395074336 +0000 UTC m=+0.185973700 container died 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:38:26 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a9d6734a22b53bd07006e20f0d297bf603448731f1b2ad9b8dc48bcd8b478e61-merged.mount: Deactivated successfully.
Nov 25 18:38:26 np0005535838 podman[127867]: 2025-11-25 23:38:26.43731366 +0000 UTC m=+0.228212994 container remove 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:38:26 np0005535838 systemd[1]: libpod-conmon-27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15.scope: Deactivated successfully.
Nov 25 18:38:26 np0005535838 podman[127981]: 2025-11-25 23:38:26.625250313 +0000 UTC m=+0.075928205 container create 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:38:26 np0005535838 podman[127981]: 2025-11-25 23:38:26.571225812 +0000 UTC m=+0.021903724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:38:26 np0005535838 python3.9[127975]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 18:38:26 np0005535838 systemd[1]: Started libpod-conmon-6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03.scope.
Nov 25 18:38:26 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:38:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:38:26 np0005535838 podman[127981]: 2025-11-25 23:38:26.789607815 +0000 UTC m=+0.240285767 container init 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:38:26 np0005535838 podman[127981]: 2025-11-25 23:38:26.799625842 +0000 UTC m=+0.250303764 container start 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:38:26 np0005535838 podman[127981]: 2025-11-25 23:38:26.825583647 +0000 UTC m=+0.276261579 container attach 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:38:27 np0005535838 python3.9[128154]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]: {
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "osd_id": 2,
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "type": "bluestore"
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:    },
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "osd_id": 1,
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "type": "bluestore"
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:    },
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "osd_id": 0,
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:        "type": "bluestore"
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]:    }
Nov 25 18:38:27 np0005535838 quizzical_kapitsa[127998]: }
Nov 25 18:38:27 np0005535838 systemd[1]: libpod-6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03.scope: Deactivated successfully.
Nov 25 18:38:27 np0005535838 systemd[1]: libpod-6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03.scope: Consumed 1.008s CPU time.
Nov 25 18:38:27 np0005535838 podman[128207]: 2025-11-25 23:38:27.864567849 +0000 UTC m=+0.044133718 container died 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 18:38:27 np0005535838 systemd[1]: var-lib-containers-storage-overlay-05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4-merged.mount: Deactivated successfully.
Nov 25 18:38:27 np0005535838 systemd[1]: session-40.scope: Deactivated successfully.
Nov 25 18:38:27 np0005535838 systemd[1]: session-40.scope: Consumed 36.235s CPU time.
Nov 25 18:38:27 np0005535838 systemd-logind[789]: Session 40 logged out. Waiting for processes to exit.
Nov 25 18:38:27 np0005535838 systemd-logind[789]: Removed session 40.
Nov 25 18:38:27 np0005535838 podman[128207]: 2025-11-25 23:38:27.942705814 +0000 UTC m=+0.122271683 container remove 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:38:27 np0005535838 systemd[1]: libpod-conmon-6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03.scope: Deactivated successfully.
Nov 25 18:38:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:38:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:38:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:38:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:38:28 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 5a5b85cb-e577-42ac-8a68-307b78764378 does not exist
Nov 25 18:38:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:29 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:38:29 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:38:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:33 np0005535838 systemd-logind[789]: New session 41 of user zuul.
Nov 25 18:38:33 np0005535838 systemd[1]: Started Session 41 of User zuul.
Nov 25 18:38:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:34 np0005535838 python3.9[128427]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 18:38:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:35 np0005535838 python3.9[128579]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:38:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:36 np0005535838 python3.9[128733]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 25 18:38:37 np0005535838 python3.9[128885]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.zboni6_q follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:38:37 np0005535838 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 18:38:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:38 np0005535838 python3.9[129012]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.zboni6_q mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113916.684458-44-193719256025881/.source.zboni6_q _original_basename=.rc3_2wdh follow=False checksum=e1ea54a4c377a432c95b6fe332b8f5ad4e4245e0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:39 np0005535838 python3.9[129164]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:38:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:40 np0005535838 python3.9[129316]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxrfdY9cGWIXdy/1Oy3o25kkem+UfkfNZM3QAaYeemr9vZEt0Kpt4rTEaZjtK/HkgMSoli0ko2twHhREfmcDjCZiEvSPhpr9yvJyxLe6m3r7nR2fIVc/1+5SeUdcJGWT8hvgD5okMZtCerl/MiW6+tFRt7Ar6X2TFlwXPjq3wia85WpL7X9vq40wZz0XlbpQxNxcEJWeVajcrd63Qib0m1FmhnmHPUqLHN0WmxXnMtONzo4fUQjq3zn230bIZCmjbFatl10s4NRy2udfAA7Xi0ubCZxQ/E8omg7y4ZxA94dJHZPmkCFSVLZUqdW3S3Ofhcem+PFVKRR2UvfcYHi79G6lS5brk3pbHqdyjd4/3scYp3aXFFt7ErEEhVud762RLGAHeACGlJQxmX8B/FbnWmbkw8BfptrYtzSuSqIXmN3UXrLrmfRrB+IMcIbbs/vzVMk6n6BzUjdXscFfnPltHEyvmdeIEBDyC5FLoJ2bTTrQpLt63pLIU09IA55rhBA+E=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBQX5RNdc24Y/t6cF9q9hL3e4G9bhmnpPT/NJWIujGtr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKt48jJg/HSNlIL9ftEIQgyUPOj8qZ1KotNNqzrVPi+UhJTDsaDnHI9k4z0iWOz87RQtpHNoPDx9+/vOjXzjj4o=#012 create=True mode=0644 path=/tmp/ansible.zboni6_q state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:41 np0005535838 python3.9[129468]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.zboni6_q' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:38:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:42 np0005535838 python3.9[129622]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.zboni6_q state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:43 np0005535838 systemd-logind[789]: Session 41 logged out. Waiting for processes to exit.
Nov 25 18:38:43 np0005535838 systemd[1]: session-41.scope: Deactivated successfully.
Nov 25 18:38:43 np0005535838 systemd[1]: session-41.scope: Consumed 6.665s CPU time.
Nov 25 18:38:43 np0005535838 systemd-logind[789]: Removed session 41.
Nov 25 18:38:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:48 np0005535838 systemd-logind[789]: New session 42 of user zuul.
Nov 25 18:38:48 np0005535838 systemd[1]: Started Session 42 of User zuul.
Nov 25 18:38:49 np0005535838 python3.9[129802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:38:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:50 np0005535838 python3.9[129958]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 18:38:51 np0005535838 python3.9[130112]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:38:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:52 np0005535838 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 18:38:52 np0005535838 systemd[1]: session-18.scope: Consumed 1min 27.470s CPU time.
Nov 25 18:38:52 np0005535838 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Nov 25 18:38:52 np0005535838 systemd-logind[789]: Removed session 18.
Nov 25 18:38:53 np0005535838 python3.9[130265]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:38:54 np0005535838 python3.9[130418]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:38:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:55 np0005535838 python3.9[130570]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:38:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:38:55 np0005535838 systemd[1]: session-42.scope: Deactivated successfully.
Nov 25 18:38:55 np0005535838 systemd[1]: session-42.scope: Consumed 4.560s CPU time.
Nov 25 18:38:55 np0005535838 systemd-logind[789]: Session 42 logged out. Waiting for processes to exit.
Nov 25 18:38:55 np0005535838 systemd-logind[789]: Removed session 42.
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:38:56
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:38:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:38:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:00 np0005535838 systemd-logind[789]: New session 43 of user zuul.
Nov 25 18:39:00 np0005535838 systemd[1]: Started Session 43 of User zuul.
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:39:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:39:01 np0005535838 python3.9[130750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:39:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:02 np0005535838 python3.9[130906]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:39:03 np0005535838 python3.9[130990]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 18:39:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:06 np0005535838 python3.9[131141]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:39:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:07 np0005535838 python3.9[131292]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 18:39:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:08 np0005535838 python3.9[131442]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:39:09 np0005535838 python3.9[131592]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:39:09 np0005535838 systemd[1]: session-43.scope: Deactivated successfully.
Nov 25 18:39:09 np0005535838 systemd[1]: session-43.scope: Consumed 6.458s CPU time.
Nov 25 18:39:09 np0005535838 systemd-logind[789]: Session 43 logged out. Waiting for processes to exit.
Nov 25 18:39:09 np0005535838 systemd-logind[789]: Removed session 43.
Nov 25 18:39:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:15 np0005535838 systemd-logind[789]: New session 44 of user zuul.
Nov 25 18:39:15 np0005535838 systemd[1]: Started Session 44 of User zuul.
Nov 25 18:39:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:16 np0005535838 python3.9[131770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:39:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:18 np0005535838 python3.9[131926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:19 np0005535838 python3.9[132078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:20 np0005535838 python3.9[132230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:21 np0005535838 python3.9[132353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113959.6805308-65-268349530606722/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=708a802323a417e1d7112a11e86e380054abde76 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:22 np0005535838 python3.9[132505]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:22 np0005535838 python3.9[132628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113961.4226327-65-95726541018487/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b24b90e66ef71edc8d3c31d00769c6c66b0b4046 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:23 np0005535838 python3.9[132780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:24 np0005535838 python3.9[132903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113962.92012-65-239447004265498/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f6acf4e4a587aa842132b9a48c89216027c4f3e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:25 np0005535838 python3.9[133055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:25 np0005535838 python3.9[133207]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:39:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:39:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:39:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:39:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:39:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:39:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:26 np0005535838 python3.9[133359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:27 np0005535838 python3.9[133482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113966.1543388-124-209122477058586/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=acafaa52eb7a56de33ea8a98161fffe977c5e7c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:28 np0005535838 python3.9[133634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:28 np0005535838 python3.9[133857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113967.6123862-124-235392930615/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d9b8f08c6aff6f6c46c7d6bb501615569a442e0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:39:29 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 70784659-3f49-416b-889d-9874e52e9ff9 does not exist
Nov 25 18:39:29 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev be3046af-7de4-4037-b296-96a53e432a23 does not exist
Nov 25 18:39:29 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 893e1cdb-5359-436b-a355-30b9a7b4e756 does not exist
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.157536) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969157664, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6639, "num_deletes": 251, "total_data_size": 7389855, "memory_usage": 7597936, "flush_reason": "Manual Compaction"}
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969192935, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 5579081, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 6777, "table_properties": {"data_size": 5555946, "index_size": 14957, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7109, "raw_key_size": 63013, "raw_average_key_size": 22, "raw_value_size": 5501803, "raw_average_value_size": 1936, "num_data_blocks": 670, "num_entries": 2841, "num_filter_entries": 2841, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113470, "oldest_key_time": 1764113470, "file_creation_time": 1764113969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 35930 microseconds, and 21421 cpu microseconds.
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.193459) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 5579081 bytes OK
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.193665) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.195350) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.195374) EVENT_LOG_v1 {"time_micros": 1764113969195365, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.195403) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 7361779, prev total WAL file size 7361779, number of live WAL files 2.
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.199492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(5448KB) 13(53KB) 8(1944B)]
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969199647, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 5636282, "oldest_snapshot_seqno": -1}
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2657 keys, 5591633 bytes, temperature: kUnknown
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969241812, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 5591633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5568878, "index_size": 15030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6661, "raw_key_size": 61085, "raw_average_key_size": 22, "raw_value_size": 5516351, "raw_average_value_size": 2076, "num_data_blocks": 673, "num_entries": 2657, "num_filter_entries": 2657, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764113969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.242152) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 5591633 bytes
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.243913) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 132.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(5.4, 0.0 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2947, records dropped: 290 output_compression: NoCompression
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.243945) EVENT_LOG_v1 {"time_micros": 1764113969243929, "job": 4, "event": "compaction_finished", "compaction_time_micros": 42262, "compaction_time_cpu_micros": 24867, "output_level": 6, "num_output_files": 1, "total_output_size": 5591633, "num_input_records": 2947, "num_output_records": 2657, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969246543, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969246645, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969246705, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.199358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:39:29 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:39:29 np0005535838 python3.9[134113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:29 np0005535838 podman[134234]: 2025-11-25 23:39:29.845146171 +0000 UTC m=+0.044749353 container create e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:39:29 np0005535838 systemd[1]: Started libpod-conmon-e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a.scope.
Nov 25 18:39:29 np0005535838 podman[134234]: 2025-11-25 23:39:29.825074759 +0000 UTC m=+0.024677991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:39:29 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:39:29 np0005535838 podman[134234]: 2025-11-25 23:39:29.940128677 +0000 UTC m=+0.139731949 container init e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 18:39:29 np0005535838 podman[134234]: 2025-11-25 23:39:29.949537377 +0000 UTC m=+0.149140559 container start e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:39:29 np0005535838 podman[134234]: 2025-11-25 23:39:29.953207968 +0000 UTC m=+0.152811170 container attach e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:39:29 np0005535838 blissful_elion[134288]: 167 167
Nov 25 18:39:29 np0005535838 systemd[1]: libpod-e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a.scope: Deactivated successfully.
Nov 25 18:39:29 np0005535838 conmon[134288]: conmon e81706e9817ac3e5d0b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a.scope/container/memory.events
Nov 25 18:39:29 np0005535838 podman[134234]: 2025-11-25 23:39:29.958256887 +0000 UTC m=+0.157860089 container died e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:39:29 np0005535838 systemd[1]: var-lib-containers-storage-overlay-935ed644a086e8888fa5c739cd429c5be0b0c0e9879be0a91b476cce89d1db16-merged.mount: Deactivated successfully.
Nov 25 18:39:30 np0005535838 podman[134234]: 2025-11-25 23:39:30.001493268 +0000 UTC m=+0.201096460 container remove e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:39:30 np0005535838 systemd[1]: libpod-conmon-e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a.scope: Deactivated successfully.
Nov 25 18:39:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:30 np0005535838 podman[134344]: 2025-11-25 23:39:30.194596606 +0000 UTC m=+0.043594551 container create 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:39:30 np0005535838 python3.9[134338]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113969.0418453-124-71774113746416/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0f628f24b5a0cafde1c45217ffb210d911430796 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:30 np0005535838 systemd[1]: Started libpod-conmon-95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3.scope.
Nov 25 18:39:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:30 np0005535838 podman[134344]: 2025-11-25 23:39:30.179006287 +0000 UTC m=+0.028004242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:39:30 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:39:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:30 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:30 np0005535838 podman[134344]: 2025-11-25 23:39:30.303314161 +0000 UTC m=+0.152312116 container init 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:39:30 np0005535838 podman[134344]: 2025-11-25 23:39:30.317992015 +0000 UTC m=+0.166989950 container start 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:39:30 np0005535838 podman[134344]: 2025-11-25 23:39:30.321756299 +0000 UTC m=+0.170754264 container attach 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:39:31 np0005535838 python3.9[134517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:31 np0005535838 zen_booth[134361]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:39:31 np0005535838 zen_booth[134361]: --> relative data size: 1.0
Nov 25 18:39:31 np0005535838 zen_booth[134361]: --> All data devices are unavailable
Nov 25 18:39:31 np0005535838 systemd[1]: libpod-95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3.scope: Deactivated successfully.
Nov 25 18:39:31 np0005535838 systemd[1]: libpod-95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3.scope: Consumed 1.010s CPU time.
Nov 25 18:39:31 np0005535838 podman[134344]: 2025-11-25 23:39:31.405699702 +0000 UTC m=+1.254697657 container died 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:39:31 np0005535838 systemd[1]: var-lib-containers-storage-overlay-734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703-merged.mount: Deactivated successfully.
Nov 25 18:39:31 np0005535838 podman[134344]: 2025-11-25 23:39:31.473746317 +0000 UTC m=+1.322744262 container remove 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:39:31 np0005535838 systemd[1]: libpod-conmon-95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3.scope: Deactivated successfully.
Nov 25 18:39:31 np0005535838 python3.9[134729]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:32 np0005535838 podman[134923]: 2025-11-25 23:39:32.218342524 +0000 UTC m=+0.051258082 container create 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:39:32 np0005535838 systemd[1]: Started libpod-conmon-3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c.scope.
Nov 25 18:39:32 np0005535838 podman[134923]: 2025-11-25 23:39:32.195935217 +0000 UTC m=+0.028850845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:39:32 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:39:32 np0005535838 podman[134923]: 2025-11-25 23:39:32.33291837 +0000 UTC m=+0.165834008 container init 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:39:32 np0005535838 podman[134923]: 2025-11-25 23:39:32.343350727 +0000 UTC m=+0.176266315 container start 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:39:32 np0005535838 podman[134923]: 2025-11-25 23:39:32.347614095 +0000 UTC m=+0.180529693 container attach 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:39:32 np0005535838 charming_jackson[134962]: 167 167
Nov 25 18:39:32 np0005535838 systemd[1]: libpod-3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c.scope: Deactivated successfully.
Nov 25 18:39:32 np0005535838 podman[134923]: 2025-11-25 23:39:32.353523997 +0000 UTC m=+0.186439575 container died 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 18:39:32 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1765c60143c495ae4ced75202631be7a08601e36037577d665b815adb2226951-merged.mount: Deactivated successfully.
Nov 25 18:39:32 np0005535838 podman[134923]: 2025-11-25 23:39:32.400978404 +0000 UTC m=+0.233893962 container remove 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:39:32 np0005535838 systemd[1]: libpod-conmon-3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c.scope: Deactivated successfully.
Nov 25 18:39:32 np0005535838 podman[135039]: 2025-11-25 23:39:32.634344342 +0000 UTC m=+0.065018152 container create f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:39:32 np0005535838 python3.9[135033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:32 np0005535838 systemd[1]: Started libpod-conmon-f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c.scope.
Nov 25 18:39:32 np0005535838 podman[135039]: 2025-11-25 23:39:32.60776357 +0000 UTC m=+0.038437390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:39:32 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:39:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:32 np0005535838 podman[135039]: 2025-11-25 23:39:32.753043771 +0000 UTC m=+0.183717571 container init f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:39:32 np0005535838 podman[135039]: 2025-11-25 23:39:32.765536135 +0000 UTC m=+0.196209955 container start f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 18:39:32 np0005535838 podman[135039]: 2025-11-25 23:39:32.771052357 +0000 UTC m=+0.201726147 container attach f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:39:33 np0005535838 python3.9[135184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113972.066453-183-146216826534562/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=82490ca0d4ca448635a334617268049830ebd7a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]: {
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:    "0": [
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:        {
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "devices": [
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "/dev/loop3"
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            ],
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_name": "ceph_lv0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_size": "21470642176",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "name": "ceph_lv0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "tags": {
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.cluster_name": "ceph",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.crush_device_class": "",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.encrypted": "0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.osd_id": "0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.type": "block",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.vdo": "0"
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            },
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "type": "block",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "vg_name": "ceph_vg0"
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:        }
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:    ],
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:    "1": [
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:        {
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "devices": [
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "/dev/loop4"
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            ],
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_name": "ceph_lv1",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_size": "21470642176",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "name": "ceph_lv1",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "tags": {
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.cluster_name": "ceph",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.crush_device_class": "",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.encrypted": "0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.osd_id": "1",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.type": "block",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.vdo": "0"
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            },
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "type": "block",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "vg_name": "ceph_vg1"
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:        }
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:    ],
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:    "2": [
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:        {
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "devices": [
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "/dev/loop5"
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            ],
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_name": "ceph_lv2",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_size": "21470642176",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "name": "ceph_lv2",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "tags": {
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.cluster_name": "ceph",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.crush_device_class": "",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.encrypted": "0",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.osd_id": "2",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.type": "block",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:                "ceph.vdo": "0"
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            },
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "type": "block",
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:            "vg_name": "ceph_vg2"
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:        }
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]:    ]
Nov 25 18:39:33 np0005535838 happy_ptolemy[135057]: }
Nov 25 18:39:33 np0005535838 systemd[1]: libpod-f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c.scope: Deactivated successfully.
Nov 25 18:39:33 np0005535838 podman[135039]: 2025-11-25 23:39:33.55941269 +0000 UTC m=+0.990086470 container died f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 18:39:33 np0005535838 systemd[1]: var-lib-containers-storage-overlay-226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c-merged.mount: Deactivated successfully.
Nov 25 18:39:33 np0005535838 podman[135039]: 2025-11-25 23:39:33.617927911 +0000 UTC m=+1.048601701 container remove f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:39:33 np0005535838 systemd[1]: libpod-conmon-f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c.scope: Deactivated successfully.
Nov 25 18:39:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:34 np0005535838 python3.9[135450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:34 np0005535838 podman[135514]: 2025-11-25 23:39:34.419057986 +0000 UTC m=+0.049004521 container create 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:39:34 np0005535838 systemd[1]: Started libpod-conmon-2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f.scope.
Nov 25 18:39:34 np0005535838 podman[135514]: 2025-11-25 23:39:34.399793655 +0000 UTC m=+0.029740180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:39:34 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:39:34 np0005535838 podman[135514]: 2025-11-25 23:39:34.516064487 +0000 UTC m=+0.146011032 container init 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:39:34 np0005535838 podman[135514]: 2025-11-25 23:39:34.52559386 +0000 UTC m=+0.155540375 container start 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:39:34 np0005535838 podman[135514]: 2025-11-25 23:39:34.529013345 +0000 UTC m=+0.158959860 container attach 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:39:34 np0005535838 lucid_mclean[135554]: 167 167
Nov 25 18:39:34 np0005535838 systemd[1]: libpod-2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f.scope: Deactivated successfully.
Nov 25 18:39:34 np0005535838 podman[135514]: 2025-11-25 23:39:34.533767385 +0000 UTC m=+0.163713900 container died 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:39:34 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ee3e201b3e1ad1a87f8e14d6ff3104296581143b26b5c5b2b263939aaffe54ad-merged.mount: Deactivated successfully.
Nov 25 18:39:34 np0005535838 podman[135514]: 2025-11-25 23:39:34.573126589 +0000 UTC m=+0.203073104 container remove 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:39:34 np0005535838 systemd[1]: libpod-conmon-2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f.scope: Deactivated successfully.
Nov 25 18:39:34 np0005535838 podman[135645]: 2025-11-25 23:39:34.784222683 +0000 UTC m=+0.056295321 container create 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:39:34 np0005535838 systemd[1]: Started libpod-conmon-28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3.scope.
Nov 25 18:39:34 np0005535838 podman[135645]: 2025-11-25 23:39:34.765350983 +0000 UTC m=+0.037423671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:39:34 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:39:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:39:34 np0005535838 podman[135645]: 2025-11-25 23:39:34.910242594 +0000 UTC m=+0.182315292 container init 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:39:34 np0005535838 podman[135645]: 2025-11-25 23:39:34.921137225 +0000 UTC m=+0.193209883 container start 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:39:34 np0005535838 podman[135645]: 2025-11-25 23:39:34.924424445 +0000 UTC m=+0.196497093 container attach 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:39:34 np0005535838 python3.9[135665]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113973.6404994-183-260501623802104/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d9b8f08c6aff6f6c46c7d6bb501615569a442e0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:35 np0005535838 python3.9[135834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]: {
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "osd_id": 2,
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "type": "bluestore"
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:    },
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "osd_id": 1,
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "type": "bluestore"
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:    },
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "osd_id": 0,
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:        "type": "bluestore"
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]:    }
Nov 25 18:39:35 np0005535838 stupefied_dewdney[135670]: }
Nov 25 18:39:36 np0005535838 systemd[1]: libpod-28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3.scope: Deactivated successfully.
Nov 25 18:39:36 np0005535838 systemd[1]: libpod-28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3.scope: Consumed 1.089s CPU time.
Nov 25 18:39:36 np0005535838 podman[135645]: 2025-11-25 23:39:36.006371364 +0000 UTC m=+1.278444042 container died 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:39:36 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399-merged.mount: Deactivated successfully.
Nov 25 18:39:36 np0005535838 podman[135645]: 2025-11-25 23:39:36.062239233 +0000 UTC m=+1.334311871 container remove 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:39:36 np0005535838 systemd[1]: libpod-conmon-28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3.scope: Deactivated successfully.
Nov 25 18:39:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:39:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:39:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:39:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:39:36 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 8646aa4b-d4cd-4c90-8c4f-732837c37d3b does not exist
Nov 25 18:39:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:36 np0005535838 python3.9[136042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113975.1989286-183-222597940016056/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4cd48abd63382b9dcb37144a7ad2a1da07f0d3bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:39:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:39:38 np0005535838 python3.9[136194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:38 np0005535838 python3.9[136346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:39 np0005535838 python3.9[136469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113978.253518-251-245657119477948/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:40 np0005535838 python3.9[136621]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:41 np0005535838 python3.9[136773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:41 np0005535838 python3.9[136896]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113980.4917448-275-256866420861290/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:42 np0005535838 python3.9[137048]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:43 np0005535838 python3.9[137200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:44 np0005535838 python3.9[137323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113982.9753563-299-65149963766971/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:44 np0005535838 python3.9[137475]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:45 np0005535838 python3.9[137627]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:46 np0005535838 python3.9[137750]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113985.197418-323-245228928062019/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:47 np0005535838 python3.9[137902]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:47 np0005535838 python3.9[138054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:48 np0005535838 python3.9[138177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113987.368622-347-152964366831669/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:49 np0005535838 python3.9[138329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:39:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:50 np0005535838 python3.9[138481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:39:51 np0005535838 python3.9[138604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113989.7105746-371-155055669596739/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:51 np0005535838 systemd-logind[789]: Session 44 logged out. Waiting for processes to exit.
Nov 25 18:39:51 np0005535838 systemd[1]: session-44.scope: Deactivated successfully.
Nov 25 18:39:51 np0005535838 systemd[1]: session-44.scope: Consumed 27.505s CPU time.
Nov 25 18:39:51 np0005535838 systemd-logind[789]: Removed session 44.
Nov 25 18:39:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:39:56
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:39:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:57 np0005535838 systemd-logind[789]: New session 45 of user zuul.
Nov 25 18:39:57 np0005535838 systemd[1]: Started Session 45 of User zuul.
Nov 25 18:39:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:39:58 np0005535838 python3.9[138784]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:39:59 np0005535838 python3.9[138936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:00 np0005535838 python3.9[139059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113999.044084-34-113953875962895/.source.conf _original_basename=ceph.conf follow=False checksum=2731b3c25c88107bdeb6ffd28d9d5d2aeb7ab117 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:01 np0005535838 python3.9[139211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:40:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:40:01 np0005535838 python3.9[139334]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114000.8133557-34-69613324286698/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=7118a3e4848d5b96f84dfc7266d24215d2762b5c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:02 np0005535838 systemd[1]: session-45.scope: Deactivated successfully.
Nov 25 18:40:02 np0005535838 systemd[1]: session-45.scope: Consumed 3.169s CPU time.
Nov 25 18:40:02 np0005535838 systemd-logind[789]: Session 45 logged out. Waiting for processes to exit.
Nov 25 18:40:02 np0005535838 systemd-logind[789]: Removed session 45.
Nov 25 18:40:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:08 np0005535838 systemd-logind[789]: New session 46 of user zuul.
Nov 25 18:40:08 np0005535838 systemd[1]: Started Session 46 of User zuul.
Nov 25 18:40:09 np0005535838 python3.9[139512]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:40:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:11 np0005535838 python3.9[139672]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:40:11 np0005535838 python3.9[139824]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:40:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:12 np0005535838 python3.9[139974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:40:13 np0005535838 python3.9[140126]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 18:40:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:15 np0005535838 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 18:40:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:16 np0005535838 python3.9[140282]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:40:17 np0005535838 python3.9[140366]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:40:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:19 np0005535838 python3.9[140519]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 18:40:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:21 np0005535838 python3[140674]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 25 18:40:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:22 np0005535838 python3.9[140826]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:23 np0005535838 python3.9[140978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:24 np0005535838 python3.9[141056]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:25 np0005535838 python3.9[141208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:25 np0005535838 python3.9[141286]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lsm9ftva recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:40:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:40:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:40:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:40:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:40:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:40:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:26 np0005535838 python3.9[141438]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:27 np0005535838 python3.9[141516]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:28 np0005535838 python3.9[141668]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:40:29 np0005535838 python3[141821]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 18:40:30 np0005535838 python3.9[141973]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:30 np0005535838 python3.9[142098]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114029.4797447-157-161243024922662/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:31 np0005535838 python3.9[142250]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:32 np0005535838 python3.9[142375]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114031.2438476-172-226970138791403/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:33 np0005535838 python3.9[142529]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:34 np0005535838 python3.9[142654]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114032.8423355-187-265472278721859/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:34 np0005535838 python3.9[142806]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:35 np0005535838 python3.9[142931]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114034.4581976-202-86526455506206/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:36 np0005535838 python3.9[143108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:40:37 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 76468d1e-4f2e-49c1-b3ec-90ee88a19724 does not exist
Nov 25 18:40:37 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 84fb1ff5-ab67-4473-a789-de4ac3c61791 does not exist
Nov 25 18:40:37 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 387496ba-f49c-4711-b60a-d2e1faefac8c does not exist
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:40:37 np0005535838 python3.9[143337]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114035.9360292-217-4637282991207/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:40:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:40:37 np0005535838 podman[143580]: 2025-11-25 23:40:37.80066877 +0000 UTC m=+0.043229943 container create 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:40:37 np0005535838 systemd[1]: Started libpod-conmon-72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83.scope.
Nov 25 18:40:37 np0005535838 podman[143580]: 2025-11-25 23:40:37.783187669 +0000 UTC m=+0.025748842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:40:37 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:40:37 np0005535838 podman[143580]: 2025-11-25 23:40:37.926359389 +0000 UTC m=+0.168920612 container init 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:40:37 np0005535838 podman[143580]: 2025-11-25 23:40:37.944762275 +0000 UTC m=+0.187323488 container start 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:40:37 np0005535838 podman[143580]: 2025-11-25 23:40:37.948607947 +0000 UTC m=+0.191169130 container attach 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:40:37 np0005535838 gallant_knuth[143620]: 167 167
Nov 25 18:40:37 np0005535838 systemd[1]: libpod-72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83.scope: Deactivated successfully.
Nov 25 18:40:37 np0005535838 podman[143580]: 2025-11-25 23:40:37.951490343 +0000 UTC m=+0.194051526 container died 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:40:37 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8b5961e259e82ef71a840599fc8fa64c39b84b387a9b40a3db7a363d25befabd-merged.mount: Deactivated successfully.
Nov 25 18:40:38 np0005535838 podman[143580]: 2025-11-25 23:40:38.004573345 +0000 UTC m=+0.247134528 container remove 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:40:38 np0005535838 systemd[1]: libpod-conmon-72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83.scope: Deactivated successfully.
Nov 25 18:40:38 np0005535838 python3.9[143653]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:38 np0005535838 podman[143676]: 2025-11-25 23:40:38.141026388 +0000 UTC m=+0.042061992 container create 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:40:38 np0005535838 systemd[1]: Started libpod-conmon-5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd.scope.
Nov 25 18:40:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:38 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:40:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:38 np0005535838 podman[143676]: 2025-11-25 23:40:38.121698747 +0000 UTC m=+0.022734381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:40:38 np0005535838 podman[143676]: 2025-11-25 23:40:38.225659663 +0000 UTC m=+0.126695277 container init 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:40:38 np0005535838 podman[143676]: 2025-11-25 23:40:38.241712667 +0000 UTC m=+0.142748301 container start 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:40:38 np0005535838 podman[143676]: 2025-11-25 23:40:38.246514754 +0000 UTC m=+0.147550378 container attach 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:40:38 np0005535838 python3.9[143849]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:40:39 np0005535838 strange_kapitsa[143697]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:40:39 np0005535838 strange_kapitsa[143697]: --> relative data size: 1.0
Nov 25 18:40:39 np0005535838 strange_kapitsa[143697]: --> All data devices are unavailable
Nov 25 18:40:39 np0005535838 systemd[1]: libpod-5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd.scope: Deactivated successfully.
Nov 25 18:40:39 np0005535838 systemd[1]: libpod-5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd.scope: Consumed 1.031s CPU time.
Nov 25 18:40:39 np0005535838 podman[143676]: 2025-11-25 23:40:39.347318763 +0000 UTC m=+1.248354397 container died 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:40:39 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91-merged.mount: Deactivated successfully.
Nov 25 18:40:39 np0005535838 podman[143676]: 2025-11-25 23:40:39.414124317 +0000 UTC m=+1.315159931 container remove 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:40:39 np0005535838 systemd[1]: libpod-conmon-5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd.scope: Deactivated successfully.
Nov 25 18:40:39 np0005535838 python3.9[144093]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:40 np0005535838 podman[144228]: 2025-11-25 23:40:40.223621134 +0000 UTC m=+0.068342756 container create b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:40:40 np0005535838 systemd[1]: Started libpod-conmon-b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573.scope.
Nov 25 18:40:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:40 np0005535838 podman[144228]: 2025-11-25 23:40:40.194906305 +0000 UTC m=+0.039627967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:40:40 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:40:40 np0005535838 podman[144228]: 2025-11-25 23:40:40.325304279 +0000 UTC m=+0.170025881 container init b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:40:40 np0005535838 podman[144228]: 2025-11-25 23:40:40.33215116 +0000 UTC m=+0.176872772 container start b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:40:40 np0005535838 podman[144228]: 2025-11-25 23:40:40.335966421 +0000 UTC m=+0.180688023 container attach b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 18:40:40 np0005535838 gallant_ride[144273]: 167 167
Nov 25 18:40:40 np0005535838 systemd[1]: libpod-b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573.scope: Deactivated successfully.
Nov 25 18:40:40 np0005535838 podman[144228]: 2025-11-25 23:40:40.342698989 +0000 UTC m=+0.187420601 container died b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 25 18:40:40 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f9b8f5c7fb9dfbca8d5a3158e10704e875009fb248abcf49c021ca53584e4492-merged.mount: Deactivated successfully.
Nov 25 18:40:40 np0005535838 podman[144228]: 2025-11-25 23:40:40.390796469 +0000 UTC m=+0.235518091 container remove b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:40:40 np0005535838 systemd[1]: libpod-conmon-b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573.scope: Deactivated successfully.
Nov 25 18:40:40 np0005535838 podman[144365]: 2025-11-25 23:40:40.578860754 +0000 UTC m=+0.053608646 container create 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:40:40 np0005535838 systemd[1]: Started libpod-conmon-0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146.scope.
Nov 25 18:40:40 np0005535838 podman[144365]: 2025-11-25 23:40:40.554619775 +0000 UTC m=+0.029367757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:40:40 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:40:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:40 np0005535838 podman[144365]: 2025-11-25 23:40:40.680798067 +0000 UTC m=+0.155545969 container init 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:40:40 np0005535838 podman[144365]: 2025-11-25 23:40:40.697472127 +0000 UTC m=+0.172220049 container start 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:40:40 np0005535838 podman[144365]: 2025-11-25 23:40:40.701209836 +0000 UTC m=+0.175957738 container attach 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:40:40 np0005535838 python3.9[144384]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]: {
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:    "0": [
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:        {
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "devices": [
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "/dev/loop3"
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            ],
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_name": "ceph_lv0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_size": "21470642176",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "name": "ceph_lv0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "tags": {
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.cluster_name": "ceph",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.crush_device_class": "",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.encrypted": "0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.osd_id": "0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.type": "block",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.vdo": "0"
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            },
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "type": "block",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "vg_name": "ceph_vg0"
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:        }
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:    ],
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:    "1": [
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:        {
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "devices": [
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "/dev/loop4"
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            ],
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_name": "ceph_lv1",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_size": "21470642176",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "name": "ceph_lv1",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "tags": {
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.cluster_name": "ceph",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.crush_device_class": "",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.encrypted": "0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.osd_id": "1",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.type": "block",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.vdo": "0"
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            },
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "type": "block",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "vg_name": "ceph_vg1"
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:        }
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:    ],
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:    "2": [
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:        {
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "devices": [
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "/dev/loop5"
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            ],
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_name": "ceph_lv2",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_size": "21470642176",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "name": "ceph_lv2",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "tags": {
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.cluster_name": "ceph",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.crush_device_class": "",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.encrypted": "0",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.osd_id": "2",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.type": "block",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:                "ceph.vdo": "0"
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            },
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "type": "block",
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:            "vg_name": "ceph_vg2"
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:        }
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]:    ]
Nov 25 18:40:41 np0005535838 xenodochial_tharp[144389]: }
Nov 25 18:40:41 np0005535838 systemd[1]: libpod-0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146.scope: Deactivated successfully.
Nov 25 18:40:41 np0005535838 podman[144365]: 2025-11-25 23:40:41.477776073 +0000 UTC m=+0.952523965 container died 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:40:41 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c-merged.mount: Deactivated successfully.
Nov 25 18:40:41 np0005535838 podman[144365]: 2025-11-25 23:40:41.552464526 +0000 UTC m=+1.027212418 container remove 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:40:41 np0005535838 systemd[1]: libpod-conmon-0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146.scope: Deactivated successfully.
Nov 25 18:40:41 np0005535838 python3.9[144548]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:40:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:42 np0005535838 podman[144860]: 2025-11-25 23:40:42.415091505 +0000 UTC m=+0.068514620 container create 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:40:42 np0005535838 systemd[1]: Started libpod-conmon-68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236.scope.
Nov 25 18:40:42 np0005535838 podman[144860]: 2025-11-25 23:40:42.386901651 +0000 UTC m=+0.040324806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:40:42 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:40:42 np0005535838 podman[144860]: 2025-11-25 23:40:42.526076056 +0000 UTC m=+0.179499221 container init 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:40:42 np0005535838 podman[144860]: 2025-11-25 23:40:42.538590796 +0000 UTC m=+0.192013911 container start 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:40:42 np0005535838 podman[144860]: 2025-11-25 23:40:42.542712295 +0000 UTC m=+0.196135410 container attach 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 18:40:42 np0005535838 practical_hopper[144876]: 167 167
Nov 25 18:40:42 np0005535838 python3.9[144858]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:40:42 np0005535838 systemd[1]: libpod-68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236.scope: Deactivated successfully.
Nov 25 18:40:42 np0005535838 podman[144882]: 2025-11-25 23:40:42.615335233 +0000 UTC m=+0.042556905 container died 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:40:42 np0005535838 systemd[1]: var-lib-containers-storage-overlay-9c50b5cd25fc6d714b7dff4a5768ebfff2d947f196e37967344a6426c5fb3fed-merged.mount: Deactivated successfully.
Nov 25 18:40:42 np0005535838 podman[144882]: 2025-11-25 23:40:42.657344293 +0000 UTC m=+0.084565915 container remove 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 18:40:42 np0005535838 systemd[1]: libpod-conmon-68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236.scope: Deactivated successfully.
Nov 25 18:40:42 np0005535838 podman[144953]: 2025-11-25 23:40:42.907588861 +0000 UTC m=+0.071616903 container create 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:40:42 np0005535838 systemd[1]: Started libpod-conmon-5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae.scope.
Nov 25 18:40:42 np0005535838 podman[144953]: 2025-11-25 23:40:42.879844918 +0000 UTC m=+0.043873010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:40:42 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:40:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:40:42 np0005535838 podman[144953]: 2025-11-25 23:40:42.992884853 +0000 UTC m=+0.156912875 container init 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:40:43 np0005535838 podman[144953]: 2025-11-25 23:40:43.010621971 +0000 UTC m=+0.174649993 container start 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:40:43 np0005535838 podman[144953]: 2025-11-25 23:40:43.013965399 +0000 UTC m=+0.177993441 container attach 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:40:43 np0005535838 python3.9[145079]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:44 np0005535838 nervous_elion[145006]: {
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "osd_id": 2,
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "type": "bluestore"
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:    },
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "osd_id": 1,
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "type": "bluestore"
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:    },
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "osd_id": 0,
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:        "type": "bluestore"
Nov 25 18:40:44 np0005535838 nervous_elion[145006]:    }
Nov 25 18:40:44 np0005535838 nervous_elion[145006]: }
Nov 25 18:40:44 np0005535838 systemd[1]: libpod-5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae.scope: Deactivated successfully.
Nov 25 18:40:44 np0005535838 podman[144953]: 2025-11-25 23:40:44.070273134 +0000 UTC m=+1.234301176 container died 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:40:44 np0005535838 systemd[1]: libpod-5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae.scope: Consumed 1.065s CPU time.
Nov 25 18:40:44 np0005535838 systemd[1]: var-lib-containers-storage-overlay-6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87-merged.mount: Deactivated successfully.
Nov 25 18:40:44 np0005535838 podman[144953]: 2025-11-25 23:40:44.15000794 +0000 UTC m=+1.314035952 container remove 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:40:44 np0005535838 systemd[1]: libpod-conmon-5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae.scope: Deactivated successfully.
Nov 25 18:40:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:40:44 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:40:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:40:44 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:40:44 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 282d49a5-96ba-4e26-b13d-d594e373ac69 does not exist
Nov 25 18:40:44 np0005535838 python3.9[145318]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.282999) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045283062, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 845, "num_deletes": 250, "total_data_size": 792371, "memory_usage": 808712, "flush_reason": "Manual Compaction"}
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045290890, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 507851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6778, "largest_seqno": 7622, "table_properties": {"data_size": 504359, "index_size": 1272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8793, "raw_average_key_size": 19, "raw_value_size": 496971, "raw_average_value_size": 1106, "num_data_blocks": 60, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113969, "oldest_key_time": 1764113969, "file_creation_time": 1764114045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7916 microseconds, and 2073 cpu microseconds.
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.290926) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 507851 bytes OK
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.290943) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.292663) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.292675) EVENT_LOG_v1 {"time_micros": 1764114045292671, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.292689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 788198, prev total WAL file size 788198, number of live WAL files 2.
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.293079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(495KB)], [20(5460KB)]
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045293109, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 6099484, "oldest_snapshot_seqno": -1}
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2627 keys, 4461660 bytes, temperature: kUnknown
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045312814, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 4461660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4442145, "index_size": 11854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6597, "raw_key_size": 60833, "raw_average_key_size": 23, "raw_value_size": 4393082, "raw_average_value_size": 1672, "num_data_blocks": 536, "num_entries": 2627, "num_filter_entries": 2627, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.312963) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 4461660 bytes
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.314219) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 308.8 rd, 225.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 5.3 +0.0 blob) out(4.3 +0.0 blob), read-write-amplify(20.8) write-amplify(8.8) OK, records in: 3106, records dropped: 479 output_compression: NoCompression
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.314235) EVENT_LOG_v1 {"time_micros": 1764114045314227, "job": 6, "event": "compaction_finished", "compaction_time_micros": 19750, "compaction_time_cpu_micros": 10477, "output_level": 6, "num_output_files": 1, "total_output_size": 4461660, "num_input_records": 3106, "num_output_records": 2627, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045314383, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045315111, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.293021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:40:45 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:40:46 np0005535838 python3.9[145471]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:40:46 np0005535838 ovs-vsctl[145472]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 18:40:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:46 np0005535838 python3.9[145624]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:40:47 np0005535838 python3.9[145779]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:40:47 np0005535838 ovs-vsctl[145780]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 25 18:40:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:48 np0005535838 python3.9[145930]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:40:49 np0005535838 python3.9[146084]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:40:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:50 np0005535838 python3.9[146236]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:50 np0005535838 python3.9[146314]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:40:51 np0005535838 python3.9[146466]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:52 np0005535838 python3.9[146544]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:40:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:52 np0005535838 python3.9[146696]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:53 np0005535838 python3.9[146848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:54 np0005535838 python3.9[146926]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:55 np0005535838 python3.9[147078]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:40:55 np0005535838 python3.9[147156]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:40:56
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['vms', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups']
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:40:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:56 np0005535838 python3.9[147308]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:40:56 np0005535838 systemd[1]: Reloading.
Nov 25 18:40:56 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:40:56 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:40:57 np0005535838 python3.9[147497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:40:58 np0005535838 python3.9[147575]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:40:59 np0005535838 python3.9[147727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:40:59 np0005535838 python3.9[147805]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:41:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:00 np0005535838 python3.9[147957]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:41:00 np0005535838 systemd[1]: Reloading.
Nov 25 18:41:01 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:41:01 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:41:01 np0005535838 systemd[1]: Starting Create netns directory...
Nov 25 18:41:01 np0005535838 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 18:41:01 np0005535838 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 18:41:01 np0005535838 systemd[1]: Finished Create netns directory.
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:41:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:41:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:02 np0005535838 python3.9[148150]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:03 np0005535838 python3.9[148302]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:41:03 np0005535838 python3.9[148425]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114062.4464407-468-242865811011017/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:04 np0005535838 python3.9[148577]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:05 np0005535838 python3.9[148729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:41:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:06 np0005535838 python3.9[148852]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114065.1357996-493-123878201658838/.source.json _original_basename=.4hjznkpx follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:41:07 np0005535838 python3.9[149004]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:41:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:09 np0005535838 python3.9[149431]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 18:41:10 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:41:10 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1768 writes, 7654 keys, 1768 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s#012Cumulative WAL: 1768 writes, 1768 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1768 writes, 7654 keys, 1768 commit groups, 1.0 writes per commit group, ingest: 7.98 MB, 0.01 MB/s#012Interval WAL: 1768 writes, 1768 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    125.6      0.05              0.02         3    0.016       0      0       0.0       0.0#012  L6      1/0    4.25 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    180.5    154.6      0.06              0.04         2    0.031    6053    769       0.0       0.0#012 Sum      1/0    4.25 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    103.0    142.1      0.11              0.06         5    0.022    6053    769       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    105.7    145.4      0.11              0.06         4    0.026    6053    769       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    180.5    154.6      0.06              0.04         2    0.031    6053    769       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    132.4      0.04              0.02         2    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.006, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.1 seconds#012Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f0edcc31f0#2 capacity: 308.00 MB usage: 574.48 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(38,504.27 KB,0.159885%) FilterBlock(6,24.30 KB,0.00770371%) IndexBlock(6,45.92 KB,0.0145603%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 18:41:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:10 np0005535838 python3.9[149583]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 18:41:12 np0005535838 python3.9[149735]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 18:41:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:13 np0005535838 python3[149913]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 18:41:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:18 np0005535838 podman[149926]: 2025-11-25 23:41:18.760340429 +0000 UTC m=+4.900654331 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 18:41:18 np0005535838 podman[150044]: 2025-11-25 23:41:18.941006169 +0000 UTC m=+0.055269042 container create 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:41:18 np0005535838 podman[150044]: 2025-11-25 23:41:18.909455839 +0000 UTC m=+0.023718802 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 18:41:18 np0005535838 python3[149913]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 18:41:20 np0005535838 python3.9[150234]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:41:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:21 np0005535838 python3.9[150389]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:41:21 np0005535838 python3.9[150465]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:41:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:22 np0005535838 python3.9[150617]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764114082.0639093-581-178525452514858/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:41:23 np0005535838 python3.9[150693]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 18:41:23 np0005535838 systemd[1]: Reloading.
Nov 25 18:41:23 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:41:23 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:41:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:24 np0005535838 python3.9[150804]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:41:24 np0005535838 systemd[1]: Reloading.
Nov 25 18:41:24 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:41:24 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:41:24 np0005535838 systemd[1]: Starting ovn_controller container...
Nov 25 18:41:25 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:41:25 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/627ed15ed15167141774381955816e4567fc2f6e5c3b0e5c37325dd8a7c71b23/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:25 np0005535838 systemd[1]: Started /usr/bin/podman healthcheck run 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58.
Nov 25 18:41:25 np0005535838 podman[150845]: 2025-11-25 23:41:25.199706784 +0000 UTC m=+0.184227186 container init 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: + sudo -E kolla_set_configs
Nov 25 18:41:25 np0005535838 podman[150845]: 2025-11-25 23:41:25.235364854 +0000 UTC m=+0.219885206 container start 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=ovn_controller)
Nov 25 18:41:25 np0005535838 edpm-start-podman-container[150845]: ovn_controller
Nov 25 18:41:25 np0005535838 systemd[1]: Created slice User Slice of UID 0.
Nov 25 18:41:25 np0005535838 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 18:41:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:25 np0005535838 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 18:41:25 np0005535838 systemd[1]: Starting User Manager for UID 0...
Nov 25 18:41:25 np0005535838 edpm-start-podman-container[150844]: Creating additional drop-in dependency for "ovn_controller" (668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58)
Nov 25 18:41:25 np0005535838 podman[150867]: 2025-11-25 23:41:25.360123385 +0000 UTC m=+0.107895203 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 18:41:25 np0005535838 systemd[1]: 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58-55876ed103f2e330.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 18:41:25 np0005535838 systemd[1]: 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58-55876ed103f2e330.service: Failed with result 'exit-code'.
Nov 25 18:41:25 np0005535838 systemd[1]: Reloading.
Nov 25 18:41:25 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:41:25 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:41:25 np0005535838 systemd[150898]: Queued start job for default target Main User Target.
Nov 25 18:41:25 np0005535838 systemd[150898]: Created slice User Application Slice.
Nov 25 18:41:25 np0005535838 systemd[150898]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 18:41:25 np0005535838 systemd[150898]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 18:41:25 np0005535838 systemd[150898]: Reached target Paths.
Nov 25 18:41:25 np0005535838 systemd[150898]: Reached target Timers.
Nov 25 18:41:25 np0005535838 systemd[150898]: Starting D-Bus User Message Bus Socket...
Nov 25 18:41:25 np0005535838 systemd[150898]: Starting Create User's Volatile Files and Directories...
Nov 25 18:41:25 np0005535838 systemd[150898]: Finished Create User's Volatile Files and Directories.
Nov 25 18:41:25 np0005535838 systemd[150898]: Listening on D-Bus User Message Bus Socket.
Nov 25 18:41:25 np0005535838 systemd[150898]: Reached target Sockets.
Nov 25 18:41:25 np0005535838 systemd[150898]: Reached target Basic System.
Nov 25 18:41:25 np0005535838 systemd[150898]: Reached target Main User Target.
Nov 25 18:41:25 np0005535838 systemd[150898]: Startup finished in 188ms.
Nov 25 18:41:25 np0005535838 systemd[1]: Started User Manager for UID 0.
Nov 25 18:41:25 np0005535838 systemd[1]: Started ovn_controller container.
Nov 25 18:41:25 np0005535838 systemd[1]: Started Session c1 of User root.
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: INFO:__main__:Validating config file
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: INFO:__main__:Writing out command to execute
Nov 25 18:41:25 np0005535838 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: ++ cat /run_command
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: + ARGS=
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: + sudo kolla_copy_cacerts
Nov 25 18:41:25 np0005535838 systemd[1]: Started Session c2 of User root.
Nov 25 18:41:25 np0005535838 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: + [[ ! -n '' ]]
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: + . kolla_extend_start
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: + umask 0022
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 18:41:25 np0005535838 NetworkManager[49538]: <info>  [1764114085.8493] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 18:41:25 np0005535838 NetworkManager[49538]: <info>  [1764114085.8499] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 18:41:25 np0005535838 NetworkManager[49538]: <info>  [1764114085.8508] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 18:41:25 np0005535838 NetworkManager[49538]: <info>  [1764114085.8513] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 18:41:25 np0005535838 NetworkManager[49538]: <info>  [1764114085.8516] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 18:41:25 np0005535838 kernel: br-int: entered promiscuous mode
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 18:41:25 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:25Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 18:41:25 np0005535838 systemd-udevd[150991]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 18:41:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:41:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:41:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:41:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:41:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:41:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:41:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:26 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:26Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 18:41:26 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:26Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 18:41:26 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:26Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 18:41:26 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:26Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 18:41:26 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:26Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 18:41:26 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:26Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 18:41:26 np0005535838 NetworkManager[49538]: <info>  [1764114086.2819] manager: (ovn-c439b2-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 18:41:26 np0005535838 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 18:41:26 np0005535838 systemd-udevd[151005]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 18:41:26 np0005535838 NetworkManager[49538]: <info>  [1764114086.2984] device (genev_sys_6081): carrier: link connected
Nov 25 18:41:26 np0005535838 NetworkManager[49538]: <info>  [1764114086.2987] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 18:41:26 np0005535838 python3.9[151121]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:41:26 np0005535838 ovs-vsctl[151123]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 18:41:27 np0005535838 python3.9[151275]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:41:27 np0005535838 ovs-vsctl[151277]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 25 18:41:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:28 np0005535838 python3.9[151437]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:41:28 np0005535838 ovs-vsctl[151438]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 25 18:41:29 np0005535838 systemd[1]: session-46.scope: Deactivated successfully.
Nov 25 18:41:29 np0005535838 systemd[1]: session-46.scope: Consumed 1min 5.700s CPU time.
Nov 25 18:41:29 np0005535838 systemd-logind[789]: Session 46 logged out. Waiting for processes to exit.
Nov 25 18:41:29 np0005535838 systemd-logind[789]: Removed session 46.
Nov 25 18:41:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:34 np0005535838 systemd-logind[789]: New session 48 of user zuul.
Nov 25 18:41:34 np0005535838 systemd[1]: Started Session 48 of User zuul.
Nov 25 18:41:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:35 np0005535838 python3.9[151616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:41:35 np0005535838 systemd[1]: Stopping User Manager for UID 0...
Nov 25 18:41:35 np0005535838 systemd[150898]: Activating special unit Exit the Session...
Nov 25 18:41:35 np0005535838 systemd[150898]: Stopped target Main User Target.
Nov 25 18:41:35 np0005535838 systemd[150898]: Stopped target Basic System.
Nov 25 18:41:35 np0005535838 systemd[150898]: Stopped target Paths.
Nov 25 18:41:35 np0005535838 systemd[150898]: Stopped target Sockets.
Nov 25 18:41:35 np0005535838 systemd[150898]: Stopped target Timers.
Nov 25 18:41:35 np0005535838 systemd[150898]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 18:41:35 np0005535838 systemd[150898]: Closed D-Bus User Message Bus Socket.
Nov 25 18:41:35 np0005535838 systemd[150898]: Stopped Create User's Volatile Files and Directories.
Nov 25 18:41:35 np0005535838 systemd[150898]: Removed slice User Application Slice.
Nov 25 18:41:35 np0005535838 systemd[150898]: Reached target Shutdown.
Nov 25 18:41:35 np0005535838 systemd[150898]: Finished Exit the Session.
Nov 25 18:41:35 np0005535838 systemd[150898]: Reached target Exit the Session.
Nov 25 18:41:35 np0005535838 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 18:41:35 np0005535838 systemd[1]: Stopped User Manager for UID 0.
Nov 25 18:41:35 np0005535838 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 18:41:35 np0005535838 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 18:41:35 np0005535838 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 18:41:35 np0005535838 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 18:41:35 np0005535838 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 18:41:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:36 np0005535838 python3.9[151776]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:37 np0005535838 python3.9[151928]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:38 np0005535838 python3.9[152080]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:39 np0005535838 python3.9[152234]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:40 np0005535838 python3.9[152386]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:41 np0005535838 python3.9[152536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:41:42 np0005535838 python3.9[152688]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 18:41:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:43 np0005535838 python3.9[152838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:41:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:44 np0005535838 python3.9[152960]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114102.9222944-86-82250326519734/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:45 np0005535838 python3.9[153228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:41:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 8a7a95d5-eb27-4e95-9106-7eb8b662ae20 does not exist
Nov 25 18:41:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev cce6d471-92ff-4ceb-b40a-7dbbb2a362ed does not exist
Nov 25 18:41:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 34eb893b-d82d-46a7-9eb9-2dad6073013b does not exist
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:41:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:41:45 np0005535838 python3.9[153463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114104.6651092-101-93927688971522/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:46 np0005535838 podman[153516]: 2025-11-25 23:41:46.046338604 +0000 UTC m=+0.049496819 container create b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 18:41:46 np0005535838 systemd[1]: Started libpod-conmon-b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff.scope.
Nov 25 18:41:46 np0005535838 podman[153516]: 2025-11-25 23:41:46.025490839 +0000 UTC m=+0.028649054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:41:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:41:46 np0005535838 podman[153516]: 2025-11-25 23:41:46.144563669 +0000 UTC m=+0.147721884 container init b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:41:46 np0005535838 podman[153516]: 2025-11-25 23:41:46.159547668 +0000 UTC m=+0.162705923 container start b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:41:46 np0005535838 podman[153516]: 2025-11-25 23:41:46.163799581 +0000 UTC m=+0.166957806 container attach b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:41:46 np0005535838 compassionate_euler[153543]: 167 167
Nov 25 18:41:46 np0005535838 systemd[1]: libpod-b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff.scope: Deactivated successfully.
Nov 25 18:41:46 np0005535838 podman[153516]: 2025-11-25 23:41:46.167978432 +0000 UTC m=+0.171136667 container died b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:41:46 np0005535838 systemd[1]: var-lib-containers-storage-overlay-542aa3fd2f2f76c5cda50bc0f50f7eb12aa584fba27f1b2bf1e0cf705f250614-merged.mount: Deactivated successfully.
Nov 25 18:41:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:46 np0005535838 podman[153516]: 2025-11-25 23:41:46.222528635 +0000 UTC m=+0.225686870 container remove b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:41:46 np0005535838 systemd[1]: libpod-conmon-b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff.scope: Deactivated successfully.
Nov 25 18:41:46 np0005535838 podman[153642]: 2025-11-25 23:41:46.44172881 +0000 UTC m=+0.052615521 container create f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:41:46 np0005535838 systemd[1]: Started libpod-conmon-f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b.scope.
Nov 25 18:41:46 np0005535838 podman[153642]: 2025-11-25 23:41:46.413013286 +0000 UTC m=+0.023899917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:41:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:41:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:46 np0005535838 podman[153642]: 2025-11-25 23:41:46.554446141 +0000 UTC m=+0.165332772 container init f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:41:46 np0005535838 podman[153642]: 2025-11-25 23:41:46.569964734 +0000 UTC m=+0.180851295 container start f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 18:41:46 np0005535838 podman[153642]: 2025-11-25 23:41:46.574162056 +0000 UTC m=+0.185048707 container attach f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 25 18:41:46 np0005535838 python3.9[153715]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:41:47 np0005535838 magical_villani[153676]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:41:47 np0005535838 magical_villani[153676]: --> relative data size: 1.0
Nov 25 18:41:47 np0005535838 magical_villani[153676]: --> All data devices are unavailable
Nov 25 18:41:47 np0005535838 podman[153642]: 2025-11-25 23:41:47.817146758 +0000 UTC m=+1.428033319 container died f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 18:41:47 np0005535838 systemd[1]: libpod-f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b.scope: Deactivated successfully.
Nov 25 18:41:47 np0005535838 systemd[1]: libpod-f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b.scope: Consumed 1.185s CPU time.
Nov 25 18:41:47 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52-merged.mount: Deactivated successfully.
Nov 25 18:41:47 np0005535838 python3.9[153819]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:41:47 np0005535838 podman[153642]: 2025-11-25 23:41:47.90396318 +0000 UTC m=+1.514849741 container remove f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:41:47 np0005535838 systemd[1]: libpod-conmon-f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b.scope: Deactivated successfully.
Nov 25 18:41:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:48 np0005535838 podman[153978]: 2025-11-25 23:41:48.682124236 +0000 UTC m=+0.066645425 container create 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:41:48 np0005535838 systemd[1]: Started libpod-conmon-73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe.scope.
Nov 25 18:41:48 np0005535838 podman[153978]: 2025-11-25 23:41:48.652256021 +0000 UTC m=+0.036777270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:41:48 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:41:48 np0005535838 podman[153978]: 2025-11-25 23:41:48.784909093 +0000 UTC m=+0.169430272 container init 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:41:48 np0005535838 podman[153978]: 2025-11-25 23:41:48.795652929 +0000 UTC m=+0.180174098 container start 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:41:48 np0005535838 podman[153978]: 2025-11-25 23:41:48.799434269 +0000 UTC m=+0.183955448 container attach 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 18:41:48 np0005535838 funny_matsumoto[153994]: 167 167
Nov 25 18:41:48 np0005535838 systemd[1]: libpod-73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe.scope: Deactivated successfully.
Nov 25 18:41:48 np0005535838 podman[153978]: 2025-11-25 23:41:48.807666708 +0000 UTC m=+0.192187907 container died 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:41:48 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b77008bb2652e724603a4d4ae5689974f2d6bdce1e54d30f437633702af31dbe-merged.mount: Deactivated successfully.
Nov 25 18:41:48 np0005535838 podman[153978]: 2025-11-25 23:41:48.862625851 +0000 UTC m=+0.247147040 container remove 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:41:48 np0005535838 systemd[1]: libpod-conmon-73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe.scope: Deactivated successfully.
Nov 25 18:41:49 np0005535838 podman[154019]: 2025-11-25 23:41:49.108246831 +0000 UTC m=+0.069784369 container create 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 18:41:49 np0005535838 podman[154019]: 2025-11-25 23:41:49.079503015 +0000 UTC m=+0.041040693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:41:49 np0005535838 systemd[1]: Started libpod-conmon-9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e.scope.
Nov 25 18:41:49 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:41:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:49 np0005535838 podman[154019]: 2025-11-25 23:41:49.27612098 +0000 UTC m=+0.237658548 container init 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:41:49 np0005535838 podman[154019]: 2025-11-25 23:41:49.287812611 +0000 UTC m=+0.249350119 container start 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:41:49 np0005535838 podman[154019]: 2025-11-25 23:41:49.291267084 +0000 UTC m=+0.252804682 container attach 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]: {
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:    "0": [
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:        {
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "devices": [
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "/dev/loop3"
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            ],
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_name": "ceph_lv0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_size": "21470642176",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "name": "ceph_lv0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "tags": {
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.cluster_name": "ceph",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.crush_device_class": "",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.encrypted": "0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.osd_id": "0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.type": "block",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.vdo": "0"
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            },
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "type": "block",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "vg_name": "ceph_vg0"
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:        }
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:    ],
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:    "1": [
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:        {
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "devices": [
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "/dev/loop4"
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            ],
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_name": "ceph_lv1",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_size": "21470642176",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "name": "ceph_lv1",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "tags": {
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.cluster_name": "ceph",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.crush_device_class": "",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.encrypted": "0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.osd_id": "1",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.type": "block",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.vdo": "0"
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            },
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "type": "block",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "vg_name": "ceph_vg1"
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:        }
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:    ],
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:    "2": [
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:        {
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "devices": [
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "/dev/loop5"
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            ],
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_name": "ceph_lv2",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_size": "21470642176",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "name": "ceph_lv2",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "tags": {
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.cluster_name": "ceph",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.crush_device_class": "",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.encrypted": "0",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.osd_id": "2",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.type": "block",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:                "ceph.vdo": "0"
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            },
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "type": "block",
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:            "vg_name": "ceph_vg2"
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:        }
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]:    ]
Nov 25 18:41:50 np0005535838 sweet_bhabha[154042]: }
Nov 25 18:41:50 np0005535838 systemd[1]: libpod-9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e.scope: Deactivated successfully.
Nov 25 18:41:50 np0005535838 podman[154019]: 2025-11-25 23:41:50.137457871 +0000 UTC m=+1.098995409 container died 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:41:50 np0005535838 systemd[1]: var-lib-containers-storage-overlay-6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf-merged.mount: Deactivated successfully.
Nov 25 18:41:50 np0005535838 podman[154019]: 2025-11-25 23:41:50.201029504 +0000 UTC m=+1.162567012 container remove 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:41:50 np0005535838 systemd[1]: libpod-conmon-9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e.scope: Deactivated successfully.
Nov 25 18:41:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:50 np0005535838 python3.9[154208]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 18:41:50 np0005535838 podman[154350]: 2025-11-25 23:41:50.898098292 +0000 UTC m=+0.055951531 container create 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:41:50 np0005535838 systemd[1]: Started libpod-conmon-07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3.scope.
Nov 25 18:41:50 np0005535838 podman[154350]: 2025-11-25 23:41:50.878473469 +0000 UTC m=+0.036326738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:41:50 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:41:50 np0005535838 podman[154350]: 2025-11-25 23:41:50.998657569 +0000 UTC m=+0.156510908 container init 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:41:51 np0005535838 podman[154350]: 2025-11-25 23:41:51.010426802 +0000 UTC m=+0.168280071 container start 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:41:51 np0005535838 podman[154350]: 2025-11-25 23:41:51.014767568 +0000 UTC m=+0.172620877 container attach 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:41:51 np0005535838 boring_chandrasekhar[154366]: 167 167
Nov 25 18:41:51 np0005535838 systemd[1]: libpod-07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3.scope: Deactivated successfully.
Nov 25 18:41:51 np0005535838 podman[154350]: 2025-11-25 23:41:51.018795535 +0000 UTC m=+0.176648794 container died 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:41:51 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7a46b925513607ef717b033115caca92c974bd6ac7f1c32c1f0ca9eb951ed1c2-merged.mount: Deactivated successfully.
Nov 25 18:41:51 np0005535838 podman[154350]: 2025-11-25 23:41:51.07642275 +0000 UTC m=+0.234276019 container remove 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:41:51 np0005535838 systemd[1]: libpod-conmon-07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3.scope: Deactivated successfully.
Nov 25 18:41:51 np0005535838 podman[154390]: 2025-11-25 23:41:51.3487462 +0000 UTC m=+0.070328574 container create 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:41:51 np0005535838 systemd[1]: Started libpod-conmon-364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4.scope.
Nov 25 18:41:51 np0005535838 podman[154390]: 2025-11-25 23:41:51.32208108 +0000 UTC m=+0.043663454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:41:51 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:41:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:41:51 np0005535838 podman[154390]: 2025-11-25 23:41:51.465100067 +0000 UTC m=+0.186682481 container init 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:41:51 np0005535838 podman[154390]: 2025-11-25 23:41:51.476605763 +0000 UTC m=+0.198188137 container start 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:41:51 np0005535838 podman[154390]: 2025-11-25 23:41:51.480731543 +0000 UTC m=+0.202313907 container attach 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 18:41:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:52 np0005535838 python3.9[154572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]: {
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "osd_id": 2,
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "type": "bluestore"
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:    },
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "osd_id": 1,
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "type": "bluestore"
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:    },
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "osd_id": 0,
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:        "type": "bluestore"
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]:    }
Nov 25 18:41:52 np0005535838 intelligent_pascal[154406]: }
Nov 25 18:41:52 np0005535838 systemd[1]: libpod-364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4.scope: Deactivated successfully.
Nov 25 18:41:52 np0005535838 systemd[1]: libpod-364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4.scope: Consumed 1.171s CPU time.
Nov 25 18:41:52 np0005535838 podman[154390]: 2025-11-25 23:41:52.643367376 +0000 UTC m=+1.364949840 container died 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:41:52 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b-merged.mount: Deactivated successfully.
Nov 25 18:41:52 np0005535838 podman[154390]: 2025-11-25 23:41:52.718227689 +0000 UTC m=+1.439810033 container remove 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:41:52 np0005535838 systemd[1]: libpod-conmon-364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4.scope: Deactivated successfully.
Nov 25 18:41:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:41:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:41:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:41:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:41:52 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev f8368813-b703-4d44-856d-af49e7ce8254 does not exist
Nov 25 18:41:53 np0005535838 python3.9[154771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114111.9067004-138-510763543191/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:53 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:41:53 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:41:54 np0005535838 python3.9[154921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:41:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:54 np0005535838 python3.9[155042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114113.45278-138-204972015910014/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:41:55 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:55Z|00025|memory|INFO|17280 kB peak resident set size after 30.1 seconds
Nov 25 18:41:55 np0005535838 ovn_controller[150860]: 2025-11-25T23:41:55Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 25 18:41:55 np0005535838 podman[155166]: 2025-11-25 23:41:55.983567902 +0000 UTC m=+0.145209446 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:41:56
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'images']
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:41:56 np0005535838 python3.9[155204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:41:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:56 np0005535838 python3.9[155339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114115.4942293-182-274171676532637/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:57 np0005535838 python3.9[155489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:41:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:41:58 np0005535838 python3.9[155610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114116.948464-182-201418477778343/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:41:59 np0005535838 python3.9[155760]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:42:00 np0005535838 python3.9[155916]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:42:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:00 np0005535838 python3.9[156068]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:42:01 np0005535838 python3.9[156146]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:42:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:42:01 np0005535838 python3.9[156298]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:42:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:02 np0005535838 python3.9[156376]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:42:03 np0005535838 python3.9[156528]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:42:04 np0005535838 python3.9[156680]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:42:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:04 np0005535838 python3.9[156758]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:42:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:05 np0005535838 python3.9[156910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:42:05 np0005535838 python3.9[156988]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:42:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:06 np0005535838 python3.9[157140]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:42:06 np0005535838 systemd[1]: Reloading.
Nov 25 18:42:07 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:42:07 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:42:08 np0005535838 python3.9[157329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:42:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:08 np0005535838 python3.9[157407]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:42:09 np0005535838 python3.9[157559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:42:09 np0005535838 python3.9[157637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:42:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:10 np0005535838 python3.9[157791]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:42:10 np0005535838 systemd[1]: Reloading.
Nov 25 18:42:10 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:42:10 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:42:11 np0005535838 systemd[1]: Starting Create netns directory...
Nov 25 18:42:11 np0005535838 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 18:42:11 np0005535838 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 18:42:11 np0005535838 systemd[1]: Finished Create netns directory.
Nov 25 18:42:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:12 np0005535838 python3.9[157985]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:42:13 np0005535838 python3.9[158137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:42:13 np0005535838 python3.9[158260]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114132.5297968-333-6380289767988/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:42:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:15 np0005535838 python3.9[158412]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:42:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:15 np0005535838 python3.9[158564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:42:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:16 np0005535838 python3.9[158687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114135.3597665-358-271186454324317/.source.json _original_basename=.og3_1hbb follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:42:17 np0005535838 python3.9[158839]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:42:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:20 np0005535838 python3.9[159266]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 25 18:42:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:21 np0005535838 python3.9[159418]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 18:42:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:22 np0005535838 python3.9[159570]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 18:42:24 np0005535838 python3[159749]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 18:42:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:42:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:42:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:42:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:42:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:42:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:42:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:27 np0005535838 podman[159812]: 2025-11-25 23:42:27.640665986 +0000 UTC m=+1.464897029 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 18:42:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:32 np0005535838 podman[159763]: 2025-11-25 23:42:32.687879205 +0000 UTC m=+8.604175134 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 18:42:32 np0005535838 podman[159906]: 2025-11-25 23:42:32.928256653 +0000 UTC m=+0.071665813 container create 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 18:42:32 np0005535838 podman[159906]: 2025-11-25 23:42:32.895123499 +0000 UTC m=+0.038532669 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 18:42:32 np0005535838 python3[159749]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 18:42:33 np0005535838 python3.9[160096]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:42:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:34 np0005535838 python3.9[160250]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:42:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:35 np0005535838 python3.9[160326]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:42:36 np0005535838 auditd[698]: Audit daemon rotating log files
Nov 25 18:42:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:36 np0005535838 python3.9[160477]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764114155.5486336-446-252190717482420/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:42:37 np0005535838 python3.9[160553]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 18:42:37 np0005535838 systemd[1]: Reloading.
Nov 25 18:42:37 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:42:37 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:42:38 np0005535838 python3.9[160664]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:42:38 np0005535838 systemd[1]: Reloading.
Nov 25 18:42:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:38 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:42:38 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:42:38 np0005535838 systemd[1]: Starting ovn_metadata_agent container...
Nov 25 18:42:38 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:42:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f4d416747d1b87463ccd2a289bdde2cb2156a91081f47ed60d52fd569c26de4/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f4d416747d1b87463ccd2a289bdde2cb2156a91081f47ed60d52fd569c26de4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:38 np0005535838 systemd[1]: Started /usr/bin/podman healthcheck run 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9.
Nov 25 18:42:38 np0005535838 podman[160705]: 2025-11-25 23:42:38.779837522 +0000 UTC m=+0.243921502 container init 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: + sudo -E kolla_set_configs
Nov 25 18:42:38 np0005535838 podman[160705]: 2025-11-25 23:42:38.822049698 +0000 UTC m=+0.286133598 container start 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 18:42:38 np0005535838 edpm-start-podman-container[160705]: ovn_metadata_agent
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Validating config file
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Copying service configuration files
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Writing out command to execute
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: ++ cat /run_command
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: + CMD=neutron-ovn-metadata-agent
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: + ARGS=
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: + sudo kolla_copy_cacerts
Nov 25 18:42:38 np0005535838 edpm-start-podman-container[160704]: Creating additional drop-in dependency for "ovn_metadata_agent" (9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9)
Nov 25 18:42:38 np0005535838 podman[160727]: 2025-11-25 23:42:38.948790416 +0000 UTC m=+0.106188471 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: + [[ ! -n '' ]]
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: + . kolla_extend_start
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: Running command: 'neutron-ovn-metadata-agent'
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: + umask 0022
Nov 25 18:42:38 np0005535838 ovn_metadata_agent[160720]: + exec neutron-ovn-metadata-agent
Nov 25 18:42:38 np0005535838 systemd[1]: Reloading.
Nov 25 18:42:39 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:42:39 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:42:39 np0005535838 systemd[1]: Started ovn_metadata_agent container.
Nov 25 18:42:39 np0005535838 systemd[1]: session-48.scope: Deactivated successfully.
Nov 25 18:42:39 np0005535838 systemd[1]: session-48.scope: Consumed 1min 2.729s CPU time.
Nov 25 18:42:39 np0005535838 systemd-logind[789]: Session 48 logged out. Waiting for processes to exit.
Nov 25 18:42:39 np0005535838 systemd-logind[789]: Removed session 48.
Nov 25 18:42:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.712 160725 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.712 160725 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.713 160725 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.713 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.713 160725 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.713 160725 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.748 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.758 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.758 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.758 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.758 160725 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.759 160725 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.772 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 2ba84045-48af-49e3-86f7-35b32300977f (UUID: 2ba84045-48af-49e3-86f7-35b32300977f) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.801 160725 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.802 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.802 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.802 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.806 160725 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.812 160725 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.819 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '2ba84045-48af-49e3-86f7-35b32300977f'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f9d3abb7cd0>], external_ids={}, name=2ba84045-48af-49e3-86f7-35b32300977f, nb_cfg_timestamp=1764114093880, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.820 160725 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f9d3abbab20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.821 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.821 160725 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.821 160725 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.821 160725 INFO oslo_service.service [-] Starting 1 workers
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.827 160725 DEBUG oslo_service.service [-] Started child 160834 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.830 160725 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp3no26x0o/privsep.sock']
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.831 160834 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-365978'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.855 160834 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.855 160834 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.856 160834 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.859 160834 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.865 160834 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 18:42:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.873 160834 INFO eventlet.wsgi.server [-] (160834) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 25 18:42:41 np0005535838 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 18:42:41 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.549 160725 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 18:42:41 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.551 160725 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3no26x0o/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 25 18:42:41 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.407 160839 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 18:42:41 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.414 160839 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 18:42:41 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.417 160839 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 25 18:42:41 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.418 160839 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160839#033[00m
Nov 25 18:42:41 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.556 160839 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a75ff2-3ffe-495f-a540-800c41d881a7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.028 160839 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.028 160839 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.028 160839 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:42:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.527 160839 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f6e6f3-d8c1-45d4-ad44-2d1c0e0bd92d]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.530 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, column=external_ids, values=({'neutron:ovn-metadata-id': 'e7c6ed73-f602-5301-8e5d-9f4191b6b114'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.699 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.746 160725 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.746 160725 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.747 160725 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.747 160725 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.747 160725 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.747 160725 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.748 160725 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.748 160725 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.748 160725 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.748 160725 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.750 160725 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.750 160725 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.750 160725 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.751 160725 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.751 160725 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.751 160725 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.751 160725 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.752 160725 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.752 160725 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.752 160725 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.752 160725 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.753 160725 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.753 160725 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.753 160725 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.754 160725 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.754 160725 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.754 160725 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.754 160725 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.755 160725 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.755 160725 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.755 160725 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.755 160725 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.756 160725 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.756 160725 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.756 160725 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.757 160725 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.757 160725 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.757 160725 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.757 160725 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.758 160725 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.758 160725 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.758 160725 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.758 160725 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.759 160725 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.759 160725 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.759 160725 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.759 160725 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.760 160725 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.760 160725 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.760 160725 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.760 160725 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.762 160725 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.762 160725 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.762 160725 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.762 160725 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.763 160725 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.763 160725 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.763 160725 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.763 160725 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.764 160725 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.764 160725 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.764 160725 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.764 160725 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.766 160725 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.766 160725 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.766 160725 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.766 160725 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.767 160725 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.767 160725 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.767 160725 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.767 160725 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.768 160725 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.768 160725 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.768 160725 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.768 160725 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.772 160725 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.772 160725 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.772 160725 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.772 160725 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.773 160725 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.773 160725 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.773 160725 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.775 160725 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.775 160725 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.775 160725 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.775 160725 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.776 160725 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.776 160725 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.776 160725 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.776 160725 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.777 160725 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.777 160725 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.777 160725 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.779 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.779 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.779 160725 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.779 160725 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.780 160725 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.780 160725 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.780 160725 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.781 160725 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.781 160725 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.781 160725 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.781 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.783 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.783 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.783 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.783 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.785 160725 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.785 160725 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.785 160725 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.785 160725 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.788 160725 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.788 160725 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.788 160725 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.788 160725 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.790 160725 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.790 160725 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.790 160725 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.790 160725 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.791 160725 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.791 160725 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.791 160725 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.791 160725 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.793 160725 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.793 160725 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.793 160725 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.793 160725 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.794 160725 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.794 160725 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.794 160725 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.794 160725 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.795 160725 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.795 160725 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.795 160725 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.795 160725 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.797 160725 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.797 160725 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.797 160725 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.797 160725 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.806 160725 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.806 160725 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.807 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.807 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.807 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.807 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.808 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.808 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.808 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.808 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.809 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.809 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.809 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.809 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.813 160725 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.813 160725 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.813 160725 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:42:42 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.813 160725 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 18:42:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:45 np0005535838 systemd-logind[789]: New session 49 of user zuul.
Nov 25 18:42:45 np0005535838 systemd[1]: Started Session 49 of User zuul.
Nov 25 18:42:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:47 np0005535838 python3.9[160999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:42:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:48 np0005535838 python3.9[161155]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:42:49 np0005535838 python3.9[161320]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 18:42:49 np0005535838 systemd[1]: Reloading.
Nov 25 18:42:50 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:42:50 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:42:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:51 np0005535838 python3.9[161505]: ansible-ansible.builtin.service_facts Invoked
Nov 25 18:42:51 np0005535838 network[161522]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 18:42:51 np0005535838 network[161523]: 'network-scripts' will be removed from distribution in near future.
Nov 25 18:42:51 np0005535838 network[161524]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 18:42:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:42:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 16.39 MB, 0.03 MB/s#012Interval WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Nov 25 18:42:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:42:53 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:42:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:42:53 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:42:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:42:54 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev a5763175-0e10-4177-9cec-90ff04bfda29 does not exist
Nov 25 18:42:54 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 74f39a0b-e517-41be-b053-b6965b223d44 does not exist
Nov 25 18:42:54 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev c0815e81-357e-43c7-909e-6b42d2efb6e4 does not exist
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:42:54 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:42:55 np0005535838 podman[162011]: 2025-11-25 23:42:55.189028136 +0000 UTC m=+0.059246681 container create 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 18:42:55 np0005535838 systemd[1]: Started libpod-conmon-4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6.scope.
Nov 25 18:42:55 np0005535838 podman[162011]: 2025-11-25 23:42:55.161211075 +0000 UTC m=+0.031429720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:42:55 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:42:55 np0005535838 podman[162011]: 2025-11-25 23:42:55.290947522 +0000 UTC m=+0.161166097 container init 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:42:55 np0005535838 podman[162011]: 2025-11-25 23:42:55.301203976 +0000 UTC m=+0.171422531 container start 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:42:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:42:55 np0005535838 podman[162011]: 2025-11-25 23:42:55.304880794 +0000 UTC m=+0.175099429 container attach 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:42:55 np0005535838 cranky_nash[162027]: 167 167
Nov 25 18:42:55 np0005535838 systemd[1]: libpod-4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6.scope: Deactivated successfully.
Nov 25 18:42:55 np0005535838 podman[162011]: 2025-11-25 23:42:55.309343052 +0000 UTC m=+0.179561597 container died 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:42:55 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8511940e529114739810384d7ebc24944ba7cca3e07053ae0c879b24c276f9b4-merged.mount: Deactivated successfully.
Nov 25 18:42:55 np0005535838 podman[162011]: 2025-11-25 23:42:55.351509687 +0000 UTC m=+0.221728272 container remove 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 18:42:55 np0005535838 systemd[1]: libpod-conmon-4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6.scope: Deactivated successfully.
Nov 25 18:42:55 np0005535838 podman[162053]: 2025-11-25 23:42:55.593055855 +0000 UTC m=+0.060064542 container create 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:42:55 np0005535838 systemd[1]: Started libpod-conmon-671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64.scope.
Nov 25 18:42:55 np0005535838 podman[162053]: 2025-11-25 23:42:55.562039309 +0000 UTC m=+0.029048076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:42:55 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:42:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:55 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:55 np0005535838 podman[162053]: 2025-11-25 23:42:55.713106365 +0000 UTC m=+0.180115102 container init 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 25 18:42:55 np0005535838 podman[162053]: 2025-11-25 23:42:55.733496718 +0000 UTC m=+0.200505435 container start 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 18:42:55 np0005535838 podman[162053]: 2025-11-25 23:42:55.737628408 +0000 UTC m=+0.204637125 container attach 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:42:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:42:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 16.54 MB, 0.03 MB/s#012Interval WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_s
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:42:56
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'backups', 'images', 'volumes', '.mgr']
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:42:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:56 np0005535838 magical_neumann[162074]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:42:56 np0005535838 magical_neumann[162074]: --> relative data size: 1.0
Nov 25 18:42:56 np0005535838 magical_neumann[162074]: --> All data devices are unavailable
Nov 25 18:42:56 np0005535838 systemd[1]: libpod-671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64.scope: Deactivated successfully.
Nov 25 18:42:56 np0005535838 systemd[1]: libpod-671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64.scope: Consumed 1.056s CPU time.
Nov 25 18:42:56 np0005535838 podman[162053]: 2025-11-25 23:42:56.851739726 +0000 UTC m=+1.318748433 container died 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:42:56 np0005535838 systemd[1]: var-lib-containers-storage-overlay-07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a-merged.mount: Deactivated successfully.
Nov 25 18:42:56 np0005535838 podman[162053]: 2025-11-25 23:42:56.930618258 +0000 UTC m=+1.397626945 container remove 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:42:56 np0005535838 systemd[1]: libpod-conmon-671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64.scope: Deactivated successfully.
Nov 25 18:42:57 np0005535838 python3.9[162260]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:42:57 np0005535838 podman[162544]: 2025-11-25 23:42:57.677115007 +0000 UTC m=+0.077182779 container create 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 18:42:57 np0005535838 systemd[1]: Started libpod-conmon-66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996.scope.
Nov 25 18:42:57 np0005535838 podman[162544]: 2025-11-25 23:42:57.64572303 +0000 UTC m=+0.045790892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:42:57 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:42:57 np0005535838 podman[162544]: 2025-11-25 23:42:57.776759153 +0000 UTC m=+0.176826945 container init 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:42:57 np0005535838 podman[162544]: 2025-11-25 23:42:57.784656963 +0000 UTC m=+0.184724755 container start 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:42:57 np0005535838 podman[162544]: 2025-11-25 23:42:57.788612608 +0000 UTC m=+0.188680380 container attach 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:42:57 np0005535838 clever_hodgkin[162586]: 167 167
Nov 25 18:42:57 np0005535838 systemd[1]: libpod-66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996.scope: Deactivated successfully.
Nov 25 18:42:57 np0005535838 podman[162544]: 2025-11-25 23:42:57.792909053 +0000 UTC m=+0.192976915 container died 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:42:57 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8be8344db39848e910f31907dce76d96bb9b9795ee2d4d3db2e1868a50c51e47-merged.mount: Deactivated successfully.
Nov 25 18:42:57 np0005535838 podman[162544]: 2025-11-25 23:42:57.842489625 +0000 UTC m=+0.242557407 container remove 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:42:57 np0005535838 systemd[1]: libpod-conmon-66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996.scope: Deactivated successfully.
Nov 25 18:42:57 np0005535838 python3.9[162583]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:42:58 np0005535838 podman[162612]: 2025-11-25 23:42:58.058875042 +0000 UTC m=+0.053639770 container create f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:42:58 np0005535838 systemd[1]: Started libpod-conmon-f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466.scope.
Nov 25 18:42:58 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:42:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:42:58 np0005535838 podman[162612]: 2025-11-25 23:42:58.036018403 +0000 UTC m=+0.030783211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:42:58 np0005535838 podman[162612]: 2025-11-25 23:42:58.134758125 +0000 UTC m=+0.129522873 container init f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:42:58 np0005535838 podman[162612]: 2025-11-25 23:42:58.149352945 +0000 UTC m=+0.144117713 container start f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:42:58 np0005535838 podman[162612]: 2025-11-25 23:42:58.156507935 +0000 UTC m=+0.151272683 container attach f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:42:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:42:58 np0005535838 python3.9[162785]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]: {
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:    "0": [
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:        {
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "devices": [
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "/dev/loop3"
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            ],
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_name": "ceph_lv0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_size": "21470642176",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "name": "ceph_lv0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "tags": {
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.cluster_name": "ceph",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.crush_device_class": "",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.encrypted": "0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.osd_id": "0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.type": "block",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.vdo": "0"
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            },
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "type": "block",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "vg_name": "ceph_vg0"
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:        }
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:    ],
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:    "1": [
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:        {
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "devices": [
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "/dev/loop4"
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            ],
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_name": "ceph_lv1",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_size": "21470642176",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "name": "ceph_lv1",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "tags": {
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.cluster_name": "ceph",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.crush_device_class": "",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.encrypted": "0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.osd_id": "1",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.type": "block",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.vdo": "0"
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            },
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "type": "block",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "vg_name": "ceph_vg1"
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:        }
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:    ],
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:    "2": [
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:        {
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "devices": [
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "/dev/loop5"
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            ],
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_name": "ceph_lv2",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_size": "21470642176",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "name": "ceph_lv2",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "tags": {
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.cluster_name": "ceph",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.crush_device_class": "",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.encrypted": "0",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.osd_id": "2",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.type": "block",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:                "ceph.vdo": "0"
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            },
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "type": "block",
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:            "vg_name": "ceph_vg2"
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:        }
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]:    ]
Nov 25 18:42:58 np0005535838 reverent_cohen[162652]: }
Nov 25 18:42:58 np0005535838 systemd[1]: libpod-f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466.scope: Deactivated successfully.
Nov 25 18:42:58 np0005535838 podman[162612]: 2025-11-25 23:42:58.931649818 +0000 UTC m=+0.926414556 container died f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 18:42:58 np0005535838 systemd[1]: var-lib-containers-storage-overlay-4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4-merged.mount: Deactivated successfully.
Nov 25 18:42:58 np0005535838 podman[162612]: 2025-11-25 23:42:58.98880402 +0000 UTC m=+0.983568758 container remove f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:42:59 np0005535838 systemd[1]: libpod-conmon-f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466.scope: Deactivated successfully.
Nov 25 18:42:59 np0005535838 podman[163095]: 2025-11-25 23:42:59.65616266 +0000 UTC m=+0.054356010 container create cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:42:59 np0005535838 systemd[1]: Started libpod-conmon-cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb.scope.
Nov 25 18:42:59 np0005535838 python3.9[163054]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:42:59 np0005535838 podman[163095]: 2025-11-25 23:42:59.627610589 +0000 UTC m=+0.025803999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:42:59 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:42:59 np0005535838 podman[163095]: 2025-11-25 23:42:59.739961474 +0000 UTC m=+0.138154834 container init cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:42:59 np0005535838 podman[163095]: 2025-11-25 23:42:59.746914829 +0000 UTC m=+0.145108149 container start cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:42:59 np0005535838 ecstatic_volhard[163112]: 167 167
Nov 25 18:42:59 np0005535838 systemd[1]: libpod-cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb.scope: Deactivated successfully.
Nov 25 18:42:59 np0005535838 podman[163095]: 2025-11-25 23:42:59.753281848 +0000 UTC m=+0.151475168 container attach cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:42:59 np0005535838 podman[163095]: 2025-11-25 23:42:59.753732261 +0000 UTC m=+0.151925611 container died cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:42:59 np0005535838 systemd[1]: var-lib-containers-storage-overlay-84c9189166d0620fd0df381c12a47a1dd669e203940ea309cb9dc39ed5d73dfd-merged.mount: Deactivated successfully.
Nov 25 18:42:59 np0005535838 podman[163095]: 2025-11-25 23:42:59.797914508 +0000 UTC m=+0.196107858 container remove cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:42:59 np0005535838 systemd[1]: libpod-conmon-cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb.scope: Deactivated successfully.
Nov 25 18:42:59 np0005535838 podman[163185]: 2025-11-25 23:42:59.967318424 +0000 UTC m=+0.034881781 container create a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 18:43:00 np0005535838 systemd[1]: Started libpod-conmon-a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12.scope.
Nov 25 18:43:00 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:43:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:43:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:43:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:43:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:43:00 np0005535838 podman[163185]: 2025-11-25 23:42:59.952786987 +0000 UTC m=+0.020350374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:43:00 np0005535838 podman[163185]: 2025-11-25 23:43:00.056147622 +0000 UTC m=+0.123711069 container init a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:43:00 np0005535838 podman[163185]: 2025-11-25 23:43:00.071147861 +0000 UTC m=+0.138711218 container start a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:43:00 np0005535838 podman[163185]: 2025-11-25 23:43:00.073787552 +0000 UTC m=+0.141350999 container attach a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:43:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:00 np0005535838 python3.9[163310]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]: {
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "osd_id": 2,
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "type": "bluestore"
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:    },
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "osd_id": 1,
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "type": "bluestore"
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:    },
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "osd_id": 0,
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:        "type": "bluestore"
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]:    }
Nov 25 18:43:01 np0005535838 blissful_kapitsa[163230]: }
Nov 25 18:43:01 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:43:01 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 16.16 MB, 0.03 MB/s#012Interval WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 25 18:43:01 np0005535838 systemd[1]: libpod-a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12.scope: Deactivated successfully.
Nov 25 18:43:01 np0005535838 podman[163185]: 2025-11-25 23:43:01.109684385 +0000 UTC m=+1.177247742 container died a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:43:01 np0005535838 systemd[1]: libpod-a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12.scope: Consumed 1.044s CPU time.
Nov 25 18:43:01 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f-merged.mount: Deactivated successfully.
Nov 25 18:43:01 np0005535838 podman[163185]: 2025-11-25 23:43:01.161845125 +0000 UTC m=+1.229408482 container remove a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 18:43:01 np0005535838 systemd[1]: libpod-conmon-a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12.scope: Deactivated successfully.
Nov 25 18:43:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:43:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:43:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:43:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 7dbd1a74-3536-4a54-bf19-dca989a92876 does not exist
Nov 25 18:43:01 np0005535838 podman[163442]: 2025-11-25 23:43:01.234373328 +0000 UTC m=+0.099452632 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 18:43:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:43:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:43:01 np0005535838 python3.9[163529]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [devicehealth INFO root] Check health
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:43:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:43:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:02 np0005535838 python3.9[163732]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:43:03 np0005535838 python3.9[163885]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:04 np0005535838 python3.9[164037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:05 np0005535838 python3.9[164195]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:06 np0005535838 python3.9[164349]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:07 np0005535838 python3.9[164501]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:08 np0005535838 python3.9[164653]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:08 np0005535838 python3.9[164805]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:09 np0005535838 podman[164957]: 2025-11-25 23:43:09.428673706 +0000 UTC m=+0.085587023 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:43:09 np0005535838 python3.9[164958]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:10 np0005535838 python3.9[165128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:11 np0005535838 python3.9[165280]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:11 np0005535838 python3.9[165432]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:12 np0005535838 python3.9[165584]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:13 np0005535838 python3.9[165736]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:14 np0005535838 python3.9[165888]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:43:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:15 np0005535838 python3.9[166040]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:43:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:16 np0005535838 python3.9[166192]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 18:43:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:17 np0005535838 python3.9[166344]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 18:43:17 np0005535838 systemd[1]: Reloading.
Nov 25 18:43:17 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:43:17 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:43:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:18 np0005535838 python3.9[166531]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:43:19 np0005535838 python3.9[166684]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:43:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:20 np0005535838 python3.9[166837]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:43:21 np0005535838 python3.9[166990]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:43:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:23 np0005535838 python3.9[167143]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.486416) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203486463, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1446, "num_deletes": 251, "total_data_size": 1573542, "memory_usage": 1606592, "flush_reason": "Manual Compaction"}
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203504604, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1532904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7623, "largest_seqno": 9068, "table_properties": {"data_size": 1526203, "index_size": 3840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13231, "raw_average_key_size": 18, "raw_value_size": 1512812, "raw_average_value_size": 2170, "num_data_blocks": 180, "num_entries": 697, "num_filter_entries": 697, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114046, "oldest_key_time": 1764114046, "file_creation_time": 1764114203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 18275 microseconds, and 9432 cpu microseconds.
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.504685) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1532904 bytes OK
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.504715) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.507133) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.507159) EVENT_LOG_v1 {"time_micros": 1764114203507151, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.507210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1567201, prev total WAL file size 1567201, number of live WAL files 2.
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.508269) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1496KB)], [23(4357KB)]
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203508338, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 5994564, "oldest_snapshot_seqno": -1}
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 2810 keys, 4719855 bytes, temperature: kUnknown
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203546824, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 4719855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4698610, "index_size": 13136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7045, "raw_key_size": 65139, "raw_average_key_size": 23, "raw_value_size": 4645755, "raw_average_value_size": 1653, "num_data_blocks": 587, "num_entries": 2810, "num_filter_entries": 2810, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.547092) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 4719855 bytes
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.548813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 122.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 4.3 +0.0 blob) out(4.5 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 3324, records dropped: 514 output_compression: NoCompression
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.548842) EVENT_LOG_v1 {"time_micros": 1764114203548828, "job": 8, "event": "compaction_finished", "compaction_time_micros": 38552, "compaction_time_cpu_micros": 22768, "output_level": 6, "num_output_files": 1, "total_output_size": 4719855, "num_input_records": 3324, "num_output_records": 2810, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203549569, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203550855, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.508099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:43:23 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:43:24 np0005535838 python3.9[167296]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:43:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:24 np0005535838 python3.9[167449]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:43:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:43:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:43:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:43:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:43:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:43:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:43:26 np0005535838 python3.9[167602]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 18:43:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:27 np0005535838 python3.9[167755]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 18:43:28 np0005535838 python3.9[167913]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 18:43:28 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:43:28 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:43:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:29 np0005535838 python3.9[168074]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:43:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:30 np0005535838 python3.9[168158]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:43:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:32 np0005535838 podman[168163]: 2025-11-25 23:43:32.374805909 +0000 UTC m=+0.193956514 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 18:43:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:40 np0005535838 podman[168275]: 2025-11-25 23:43:40.262022484 +0000 UTC m=+0.086081892 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:43:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:43:40.750 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:43:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:43:40.751 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:43:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:43:40.751 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:43:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v407: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v409: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v411: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:43:56
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'volumes', 'backups', 'vms', 'cephfs.cephfs.data', '.mgr']
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:43:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:43:58 np0005535838 kernel: SELinux:  Converting 2768 SID table entries...
Nov 25 18:43:58 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:43:58 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:43:58 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:43:58 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:43:58 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:43:58 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:43:58 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:44:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:01 np0005535838 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:44:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:44:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:44:02 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 010fbaab-a0ce-46e2-98ce-8e5d9c50d789 does not exist
Nov 25 18:44:02 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev eff805bd-9371-4266-8138-f0ec07e62935 does not exist
Nov 25 18:44:02 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 7da94ce2-822c-4721-900e-3f4d1045c44e does not exist
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:44:02 np0005535838 podman[168565]: 2025-11-25 23:44:02.601217402 +0000 UTC m=+0.174404014 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:44:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:44:03 np0005535838 podman[168706]: 2025-11-25 23:44:03.031867478 +0000 UTC m=+0.055164750 container create e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 18:44:03 np0005535838 systemd[1]: Started libpod-conmon-e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b.scope.
Nov 25 18:44:03 np0005535838 podman[168706]: 2025-11-25 23:44:03.000598445 +0000 UTC m=+0.023895737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:44:03 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:44:03 np0005535838 podman[168706]: 2025-11-25 23:44:03.121156254 +0000 UTC m=+0.144453576 container init e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:44:03 np0005535838 podman[168706]: 2025-11-25 23:44:03.126820146 +0000 UTC m=+0.150117428 container start e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 18:44:03 np0005535838 elated_bhabha[168722]: 167 167
Nov 25 18:44:03 np0005535838 podman[168706]: 2025-11-25 23:44:03.130529084 +0000 UTC m=+0.153826356 container attach e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 18:44:03 np0005535838 systemd[1]: libpod-e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b.scope: Deactivated successfully.
Nov 25 18:44:03 np0005535838 podman[168706]: 2025-11-25 23:44:03.131256024 +0000 UTC m=+0.154553306 container died e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:44:03 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7c4f98b17af44dac69a3c0d73f00b01e3703f55fdf858cfaf58d520604444410-merged.mount: Deactivated successfully.
Nov 25 18:44:03 np0005535838 podman[168706]: 2025-11-25 23:44:03.188090217 +0000 UTC m=+0.211387499 container remove e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:44:03 np0005535838 systemd[1]: libpod-conmon-e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b.scope: Deactivated successfully.
Nov 25 18:44:03 np0005535838 podman[168748]: 2025-11-25 23:44:03.364449032 +0000 UTC m=+0.044550217 container create 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:44:03 np0005535838 systemd[1]: Started libpod-conmon-4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f.scope.
Nov 25 18:44:03 np0005535838 podman[168748]: 2025-11-25 23:44:03.344098871 +0000 UTC m=+0.024200146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:44:03 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:44:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:03 np0005535838 podman[168748]: 2025-11-25 23:44:03.476828245 +0000 UTC m=+0.156929510 container init 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:44:03 np0005535838 podman[168748]: 2025-11-25 23:44:03.493832057 +0000 UTC m=+0.173933282 container start 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 18:44:03 np0005535838 podman[168748]: 2025-11-25 23:44:03.498778718 +0000 UTC m=+0.178879983 container attach 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 18:44:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:04 np0005535838 nifty_golick[168764]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:44:04 np0005535838 nifty_golick[168764]: --> relative data size: 1.0
Nov 25 18:44:04 np0005535838 nifty_golick[168764]: --> All data devices are unavailable
Nov 25 18:44:04 np0005535838 systemd[1]: libpod-4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f.scope: Deactivated successfully.
Nov 25 18:44:04 np0005535838 podman[168748]: 2025-11-25 23:44:04.580437736 +0000 UTC m=+1.260538931 container died 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 18:44:04 np0005535838 systemd[1]: libpod-4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f.scope: Consumed 1.037s CPU time.
Nov 25 18:44:04 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9-merged.mount: Deactivated successfully.
Nov 25 18:44:04 np0005535838 podman[168748]: 2025-11-25 23:44:04.632086691 +0000 UTC m=+1.312187876 container remove 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:44:04 np0005535838 systemd[1]: libpod-conmon-4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f.scope: Deactivated successfully.
Nov 25 18:44:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:05 np0005535838 podman[168945]: 2025-11-25 23:44:05.461097963 +0000 UTC m=+0.072957594 container create 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:44:05 np0005535838 systemd[1]: Started libpod-conmon-83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7.scope.
Nov 25 18:44:05 np0005535838 podman[168945]: 2025-11-25 23:44:05.43136898 +0000 UTC m=+0.043228621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:44:05 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:44:05 np0005535838 podman[168945]: 2025-11-25 23:44:05.556848591 +0000 UTC m=+0.168708212 container init 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 18:44:05 np0005535838 podman[168945]: 2025-11-25 23:44:05.563734025 +0000 UTC m=+0.175593626 container start 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:44:05 np0005535838 podman[168945]: 2025-11-25 23:44:05.567531246 +0000 UTC m=+0.179390847 container attach 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 18:44:05 np0005535838 systemd[1]: libpod-83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7.scope: Deactivated successfully.
Nov 25 18:44:05 np0005535838 inspiring_montalcini[168962]: 167 167
Nov 25 18:44:05 np0005535838 conmon[168962]: conmon 83e3c82622b4c810753d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7.scope/container/memory.events
Nov 25 18:44:05 np0005535838 podman[168945]: 2025-11-25 23:44:05.571968154 +0000 UTC m=+0.183827775 container died 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:44:05 np0005535838 systemd[1]: var-lib-containers-storage-overlay-447477ea12e713bbae6911944c10495c60fdd2fd09f2beeea54e0159344a90ea-merged.mount: Deactivated successfully.
Nov 25 18:44:05 np0005535838 podman[168945]: 2025-11-25 23:44:05.610605082 +0000 UTC m=+0.222464693 container remove 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 25 18:44:05 np0005535838 systemd[1]: libpod-conmon-83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7.scope: Deactivated successfully.
Nov 25 18:44:05 np0005535838 podman[168986]: 2025-11-25 23:44:05.831069082 +0000 UTC m=+0.055456048 container create 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:44:05 np0005535838 systemd[1]: Started libpod-conmon-1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2.scope.
Nov 25 18:44:05 np0005535838 podman[168986]: 2025-11-25 23:44:05.812889048 +0000 UTC m=+0.037276094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:44:05 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:44:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:05 np0005535838 podman[168986]: 2025-11-25 23:44:05.947305247 +0000 UTC m=+0.171692293 container init 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:44:05 np0005535838 podman[168986]: 2025-11-25 23:44:05.961749011 +0000 UTC m=+0.186136017 container start 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:44:05 np0005535838 podman[168986]: 2025-11-25 23:44:05.965858971 +0000 UTC m=+0.190245977 container attach 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:44:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]: {
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:    "0": [
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:        {
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "devices": [
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "/dev/loop3"
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            ],
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_name": "ceph_lv0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_size": "21470642176",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "name": "ceph_lv0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "tags": {
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.cluster_name": "ceph",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.crush_device_class": "",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.encrypted": "0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.osd_id": "0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.type": "block",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.vdo": "0"
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            },
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "type": "block",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "vg_name": "ceph_vg0"
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:        }
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:    ],
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:    "1": [
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:        {
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "devices": [
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "/dev/loop4"
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            ],
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_name": "ceph_lv1",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_size": "21470642176",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "name": "ceph_lv1",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "tags": {
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.cluster_name": "ceph",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.crush_device_class": "",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.encrypted": "0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.osd_id": "1",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.type": "block",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.vdo": "0"
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            },
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "type": "block",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "vg_name": "ceph_vg1"
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:        }
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:    ],
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:    "2": [
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:        {
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "devices": [
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "/dev/loop5"
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            ],
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_name": "ceph_lv2",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_size": "21470642176",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "name": "ceph_lv2",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "tags": {
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.cluster_name": "ceph",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.crush_device_class": "",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.encrypted": "0",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.osd_id": "2",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.type": "block",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:                "ceph.vdo": "0"
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            },
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "type": "block",
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:            "vg_name": "ceph_vg2"
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:        }
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]:    ]
Nov 25 18:44:06 np0005535838 elegant_faraday[169003]: }
Nov 25 18:44:06 np0005535838 systemd[1]: libpod-1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2.scope: Deactivated successfully.
Nov 25 18:44:06 np0005535838 podman[168986]: 2025-11-25 23:44:06.70240767 +0000 UTC m=+0.926794666 container died 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:44:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73-merged.mount: Deactivated successfully.
Nov 25 18:44:06 np0005535838 podman[168986]: 2025-11-25 23:44:06.787053774 +0000 UTC m=+1.011440750 container remove 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 18:44:06 np0005535838 systemd[1]: libpod-conmon-1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2.scope: Deactivated successfully.
Nov 25 18:44:07 np0005535838 podman[169166]: 2025-11-25 23:44:07.571994581 +0000 UTC m=+0.065157075 container create 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:44:07 np0005535838 systemd[1]: Started libpod-conmon-61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8.scope.
Nov 25 18:44:07 np0005535838 podman[169166]: 2025-11-25 23:44:07.52758973 +0000 UTC m=+0.020752264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:44:07 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:44:07 np0005535838 podman[169166]: 2025-11-25 23:44:07.732125585 +0000 UTC m=+0.225288109 container init 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:44:07 np0005535838 podman[169166]: 2025-11-25 23:44:07.74772631 +0000 UTC m=+0.240888804 container start 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:44:07 np0005535838 exciting_nobel[169183]: 167 167
Nov 25 18:44:07 np0005535838 systemd[1]: libpod-61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8.scope: Deactivated successfully.
Nov 25 18:44:07 np0005535838 podman[169166]: 2025-11-25 23:44:07.803840024 +0000 UTC m=+0.297002548 container attach 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:44:07 np0005535838 podman[169166]: 2025-11-25 23:44:07.804539082 +0000 UTC m=+0.297701606 container died 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 18:44:07 np0005535838 systemd[1]: var-lib-containers-storage-overlay-31544bb57c45cbba3f49f53f92f62c8634946eaad3582210ac2b243d414df45b-merged.mount: Deactivated successfully.
Nov 25 18:44:07 np0005535838 podman[169166]: 2025-11-25 23:44:07.881331687 +0000 UTC m=+0.374494211 container remove 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:44:07 np0005535838 systemd[1]: libpod-conmon-61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8.scope: Deactivated successfully.
Nov 25 18:44:08 np0005535838 kernel: SELinux:  Converting 2768 SID table entries...
Nov 25 18:44:08 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:44:08 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:44:08 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:44:08 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:44:08 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:44:08 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:44:08 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:44:08 np0005535838 podman[169213]: 2025-11-25 23:44:08.096355142 +0000 UTC m=+0.062343631 container create bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:44:08 np0005535838 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 18:44:08 np0005535838 systemd[1]: Started libpod-conmon-bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35.scope.
Nov 25 18:44:08 np0005535838 podman[169213]: 2025-11-25 23:44:08.062236733 +0000 UTC m=+0.028225252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:44:08 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:44:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:08 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:44:08 np0005535838 podman[169213]: 2025-11-25 23:44:08.1830545 +0000 UTC m=+0.149043029 container init bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:44:08 np0005535838 podman[169213]: 2025-11-25 23:44:08.196637121 +0000 UTC m=+0.162625650 container start bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:44:08 np0005535838 podman[169213]: 2025-11-25 23:44:08.200858434 +0000 UTC m=+0.166846963 container attach bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 18:44:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]: {
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "osd_id": 2,
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "type": "bluestore"
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:    },
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "osd_id": 1,
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "type": "bluestore"
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:    },
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "osd_id": 0,
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:        "type": "bluestore"
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]:    }
Nov 25 18:44:09 np0005535838 gallant_rubin[169230]: }
Nov 25 18:44:09 np0005535838 systemd[1]: libpod-bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35.scope: Deactivated successfully.
Nov 25 18:44:09 np0005535838 podman[169213]: 2025-11-25 23:44:09.132855587 +0000 UTC m=+1.098844096 container died bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:44:09 np0005535838 systemd[1]: var-lib-containers-storage-overlay-00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4-merged.mount: Deactivated successfully.
Nov 25 18:44:09 np0005535838 podman[169213]: 2025-11-25 23:44:09.207286609 +0000 UTC m=+1.173275108 container remove bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:44:09 np0005535838 systemd[1]: libpod-conmon-bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35.scope: Deactivated successfully.
Nov 25 18:44:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:44:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:44:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:44:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:44:09 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 71c11069-eda9-4fef-aa8f-649010eb9c87 does not exist
Nov 25 18:44:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:44:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:44:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:11 np0005535838 podman[169324]: 2025-11-25 23:44:11.272565893 +0000 UTC m=+0.089689298 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 18:44:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:44:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:44:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:44:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:44:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:44:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:44:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:33 np0005535838 podman[175921]: 2025-11-25 23:44:33.293303934 +0000 UTC m=+0.116467571 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:44:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:44:40.752 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:44:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:44:40.752 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:44:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:44:40.752 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:44:42 np0005535838 podman[180115]: 2025-11-25 23:44:42.263115248 +0000 UTC m=+0.078488994 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 18:44:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:44:56
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'volumes']
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:44:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:44:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:45:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:45:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:04 np0005535838 podman[186187]: 2025-11-25 23:45:04.265911793 +0000 UTC m=+0.086356153 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 18:45:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:06 np0005535838 kernel: SELinux:  Converting 2769 SID table entries...
Nov 25 18:45:06 np0005535838 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 18:45:06 np0005535838 kernel: SELinux:  policy capability open_perms=1
Nov 25 18:45:06 np0005535838 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 18:45:06 np0005535838 kernel: SELinux:  policy capability always_check_network=0
Nov 25 18:45:06 np0005535838 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 18:45:06 np0005535838 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 18:45:06 np0005535838 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 18:45:08 np0005535838 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 18:45:08 np0005535838 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 18:45:08 np0005535838 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 18:45:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:10 np0005535838 podman[186452]: 2025-11-25 23:45:10.481450696 +0000 UTC m=+0.118029773 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:45:10 np0005535838 podman[186452]: 2025-11-25 23:45:10.591068586 +0000 UTC m=+0.227647613 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:45:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:45:11 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:45:11 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:12 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 43793928-8cba-4a30-931c-f1cdfe973d7f does not exist
Nov 25 18:45:12 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 85588371-a430-4799-be4c-b5a7e27c81b5 does not exist
Nov 25 18:45:12 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 27570809-1dbb-4236-a604-437f6bb04b02 does not exist
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:45:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:12 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:45:12 np0005535838 podman[186876]: 2025-11-25 23:45:12.407484175 +0000 UTC m=+0.068120639 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 18:45:12 np0005535838 podman[187062]: 2025-11-25 23:45:12.896820632 +0000 UTC m=+0.074392976 container create f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:45:12 np0005535838 systemd[1]: Started libpod-conmon-f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618.scope.
Nov 25 18:45:12 np0005535838 podman[187062]: 2025-11-25 23:45:12.867676738 +0000 UTC m=+0.045249102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:45:12 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:45:13 np0005535838 podman[187062]: 2025-11-25 23:45:13.007366556 +0000 UTC m=+0.184938900 container init f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:45:13 np0005535838 podman[187062]: 2025-11-25 23:45:13.019648062 +0000 UTC m=+0.197220436 container start f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:45:13 np0005535838 podman[187062]: 2025-11-25 23:45:13.023427222 +0000 UTC m=+0.200999596 container attach f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:45:13 np0005535838 cranky_rosalind[187078]: 167 167
Nov 25 18:45:13 np0005535838 systemd[1]: libpod-f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618.scope: Deactivated successfully.
Nov 25 18:45:13 np0005535838 podman[187062]: 2025-11-25 23:45:13.028931688 +0000 UTC m=+0.206504042 container died f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:45:13 np0005535838 systemd[1]: var-lib-containers-storage-overlay-de67faf27eb09da5177414384b64db47e8f3c15d72aa4faef2298b89e75b3b2c-merged.mount: Deactivated successfully.
Nov 25 18:45:13 np0005535838 podman[187062]: 2025-11-25 23:45:13.073749427 +0000 UTC m=+0.251321771 container remove f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:45:13 np0005535838 systemd[1]: libpod-conmon-f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618.scope: Deactivated successfully.
Nov 25 18:45:13 np0005535838 podman[187102]: 2025-11-25 23:45:13.333002499 +0000 UTC m=+0.074172120 container create 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:45:13 np0005535838 systemd[1]: Started libpod-conmon-501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59.scope.
Nov 25 18:45:13 np0005535838 podman[187102]: 2025-11-25 23:45:13.301383989 +0000 UTC m=+0.042553660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:45:13 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:45:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:13 np0005535838 podman[187102]: 2025-11-25 23:45:13.450059025 +0000 UTC m=+0.191228646 container init 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:45:13 np0005535838 podman[187102]: 2025-11-25 23:45:13.460665077 +0000 UTC m=+0.201834688 container start 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:45:13 np0005535838 podman[187102]: 2025-11-25 23:45:13.464904819 +0000 UTC m=+0.206074430 container attach 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:45:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:14 np0005535838 quirky_hermann[187119]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:45:14 np0005535838 quirky_hermann[187119]: --> relative data size: 1.0
Nov 25 18:45:14 np0005535838 quirky_hermann[187119]: --> All data devices are unavailable
Nov 25 18:45:14 np0005535838 systemd[1]: libpod-501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59.scope: Deactivated successfully.
Nov 25 18:45:14 np0005535838 systemd[1]: libpod-501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59.scope: Consumed 1.088s CPU time.
Nov 25 18:45:14 np0005535838 podman[187102]: 2025-11-25 23:45:14.596835171 +0000 UTC m=+1.338004822 container died 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 18:45:14 np0005535838 systemd[1]: var-lib-containers-storage-overlay-5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444-merged.mount: Deactivated successfully.
Nov 25 18:45:14 np0005535838 podman[187102]: 2025-11-25 23:45:14.680113022 +0000 UTC m=+1.421282613 container remove 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:45:14 np0005535838 systemd[1]: libpod-conmon-501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59.scope: Deactivated successfully.
Nov 25 18:45:15 np0005535838 podman[187695]: 2025-11-25 23:45:15.401929408 +0000 UTC m=+0.065807417 container create d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:45:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:15 np0005535838 systemd[1]: Started libpod-conmon-d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0.scope.
Nov 25 18:45:15 np0005535838 podman[187695]: 2025-11-25 23:45:15.375018004 +0000 UTC m=+0.038896093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:45:15 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:45:15 np0005535838 podman[187695]: 2025-11-25 23:45:15.492993645 +0000 UTC m=+0.156871704 container init d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 18:45:15 np0005535838 podman[187695]: 2025-11-25 23:45:15.498979475 +0000 UTC m=+0.162857474 container start d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 18:45:15 np0005535838 podman[187695]: 2025-11-25 23:45:15.501873222 +0000 UTC m=+0.165751221 container attach d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:45:15 np0005535838 upbeat_kapitsa[187783]: 167 167
Nov 25 18:45:15 np0005535838 systemd[1]: libpod-d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0.scope: Deactivated successfully.
Nov 25 18:45:15 np0005535838 podman[187695]: 2025-11-25 23:45:15.506219777 +0000 UTC m=+0.170097796 container died d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:45:15 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d36716eeb2fe7ca2ad3809314b42107f675ecd1f0519a755b205aef7da3330ba-merged.mount: Deactivated successfully.
Nov 25 18:45:15 np0005535838 podman[187695]: 2025-11-25 23:45:15.541613616 +0000 UTC m=+0.205491605 container remove d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:45:15 np0005535838 systemd[1]: libpod-conmon-d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0.scope: Deactivated successfully.
Nov 25 18:45:15 np0005535838 podman[187949]: 2025-11-25 23:45:15.695620243 +0000 UTC m=+0.037197448 container create 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:45:15 np0005535838 systemd[1]: Started libpod-conmon-520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79.scope.
Nov 25 18:45:15 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:45:15 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:15 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:15 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:15 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:15 np0005535838 podman[187949]: 2025-11-25 23:45:15.678290064 +0000 UTC m=+0.019867289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:45:15 np0005535838 podman[187949]: 2025-11-25 23:45:15.777641111 +0000 UTC m=+0.119218346 container init 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:45:15 np0005535838 podman[187949]: 2025-11-25 23:45:15.790377879 +0000 UTC m=+0.131955074 container start 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:45:15 np0005535838 podman[187949]: 2025-11-25 23:45:15.79496457 +0000 UTC m=+0.136541795 container attach 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 18:45:15 np0005535838 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 18:45:15 np0005535838 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 18:45:15 np0005535838 systemd[1]: sshd.service: Unit process 187146 (sshd-session) remains running after unit stopped.
Nov 25 18:45:15 np0005535838 systemd[1]: sshd.service: Unit process 187148 (sshd-session) remains running after unit stopped.
Nov 25 18:45:15 np0005535838 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 18:45:15 np0005535838 systemd[1]: sshd.service: Consumed 16.482s CPU time, 38.2M memory peak, read 564.0K from disk, written 384.0K to disk.
Nov 25 18:45:15 np0005535838 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 18:45:15 np0005535838 systemd[1]: Stopping sshd-keygen.target...
Nov 25 18:45:15 np0005535838 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 18:45:15 np0005535838 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 18:45:15 np0005535838 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 18:45:15 np0005535838 systemd[1]: Reached target sshd-keygen.target.
Nov 25 18:45:15 np0005535838 systemd[1]: Starting OpenSSH server daemon...
Nov 25 18:45:15 np0005535838 systemd[1]: Started OpenSSH server daemon.
Nov 25 18:45:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]: {
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:    "0": [
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:        {
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "devices": [
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "/dev/loop3"
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            ],
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_name": "ceph_lv0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_size": "21470642176",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "name": "ceph_lv0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "tags": {
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.cluster_name": "ceph",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.crush_device_class": "",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.encrypted": "0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.osd_id": "0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.type": "block",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.vdo": "0"
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            },
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "type": "block",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "vg_name": "ceph_vg0"
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:        }
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:    ],
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:    "1": [
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:        {
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "devices": [
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "/dev/loop4"
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            ],
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_name": "ceph_lv1",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_size": "21470642176",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "name": "ceph_lv1",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "tags": {
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.cluster_name": "ceph",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.crush_device_class": "",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.encrypted": "0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.osd_id": "1",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.type": "block",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.vdo": "0"
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            },
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "type": "block",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "vg_name": "ceph_vg1"
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:        }
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:    ],
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:    "2": [
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:        {
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "devices": [
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "/dev/loop5"
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            ],
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_name": "ceph_lv2",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_size": "21470642176",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "name": "ceph_lv2",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "tags": {
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.cluster_name": "ceph",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.crush_device_class": "",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.encrypted": "0",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.osd_id": "2",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.type": "block",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:                "ceph.vdo": "0"
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            },
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "type": "block",
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:            "vg_name": "ceph_vg2"
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:        }
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]:    ]
Nov 25 18:45:16 np0005535838 vigilant_lamport[187970]: }
Nov 25 18:45:16 np0005535838 systemd[1]: libpod-520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79.scope: Deactivated successfully.
Nov 25 18:45:16 np0005535838 podman[187949]: 2025-11-25 23:45:16.633504826 +0000 UTC m=+0.975082041 container died 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:45:16 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885-merged.mount: Deactivated successfully.
Nov 25 18:45:16 np0005535838 podman[187949]: 2025-11-25 23:45:16.717847134 +0000 UTC m=+1.059424379 container remove 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:45:16 np0005535838 systemd[1]: libpod-conmon-520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79.scope: Deactivated successfully.
Nov 25 18:45:17 np0005535838 podman[188291]: 2025-11-25 23:45:17.497503577 +0000 UTC m=+0.049369711 container create f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:45:17 np0005535838 systemd[1]: Started libpod-conmon-f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27.scope.
Nov 25 18:45:17 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:45:17 np0005535838 podman[188291]: 2025-11-25 23:45:17.473092489 +0000 UTC m=+0.024958633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:45:17 np0005535838 podman[188291]: 2025-11-25 23:45:17.575889967 +0000 UTC m=+0.127756111 container init f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:45:17 np0005535838 podman[188291]: 2025-11-25 23:45:17.584236079 +0000 UTC m=+0.136102213 container start f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:45:17 np0005535838 podman[188291]: 2025-11-25 23:45:17.587232578 +0000 UTC m=+0.139098732 container attach f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:45:17 np0005535838 elated_nobel[188318]: 167 167
Nov 25 18:45:17 np0005535838 systemd[1]: libpod-f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27.scope: Deactivated successfully.
Nov 25 18:45:17 np0005535838 podman[188291]: 2025-11-25 23:45:17.591415409 +0000 UTC m=+0.143281543 container died f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:45:17 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c66f1133381c23656896e84de449188ffbe3383a5d683f01b49b30608aee7c15-merged.mount: Deactivated successfully.
Nov 25 18:45:17 np0005535838 podman[188291]: 2025-11-25 23:45:17.633810024 +0000 UTC m=+0.185676158 container remove f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 18:45:17 np0005535838 systemd[1]: libpod-conmon-f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27.scope: Deactivated successfully.
Nov 25 18:45:17 np0005535838 podman[188368]: 2025-11-25 23:45:17.785043308 +0000 UTC m=+0.038372109 container create 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 18:45:17 np0005535838 systemd[1]: Started libpod-conmon-574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41.scope.
Nov 25 18:45:17 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:45:17 np0005535838 podman[188368]: 2025-11-25 23:45:17.765921601 +0000 UTC m=+0.019250442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:45:17 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:17 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:17 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:17 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:45:17 np0005535838 podman[188368]: 2025-11-25 23:45:17.87627736 +0000 UTC m=+0.129606161 container init 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:45:17 np0005535838 podman[188368]: 2025-11-25 23:45:17.882774672 +0000 UTC m=+0.136103473 container start 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 18:45:17 np0005535838 podman[188368]: 2025-11-25 23:45:17.885581836 +0000 UTC m=+0.138910627 container attach 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:45:18 np0005535838 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 18:45:18 np0005535838 systemd[1]: Starting man-db-cache-update.service...
Nov 25 18:45:18 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:18 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:18 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:18 np0005535838 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 18:45:18 np0005535838 loving_panini[188386]: {
Nov 25 18:45:18 np0005535838 loving_panini[188386]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "osd_id": 2,
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "type": "bluestore"
Nov 25 18:45:18 np0005535838 loving_panini[188386]:    },
Nov 25 18:45:18 np0005535838 loving_panini[188386]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "osd_id": 1,
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "type": "bluestore"
Nov 25 18:45:18 np0005535838 loving_panini[188386]:    },
Nov 25 18:45:18 np0005535838 loving_panini[188386]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "osd_id": 0,
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:45:18 np0005535838 loving_panini[188386]:        "type": "bluestore"
Nov 25 18:45:18 np0005535838 loving_panini[188386]:    }
Nov 25 18:45:18 np0005535838 loving_panini[188386]: }
Nov 25 18:45:18 np0005535838 systemd[1]: libpod-574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41.scope: Deactivated successfully.
Nov 25 18:45:18 np0005535838 podman[188368]: 2025-11-25 23:45:18.874708239 +0000 UTC m=+1.128037080 container died 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:45:19 np0005535838 systemd[1]: var-lib-containers-storage-overlay-709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770-merged.mount: Deactivated successfully.
Nov 25 18:45:19 np0005535838 podman[188368]: 2025-11-25 23:45:19.902379574 +0000 UTC m=+2.155708405 container remove 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:45:19 np0005535838 systemd[1]: libpod-conmon-574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41.scope: Deactivated successfully.
Nov 25 18:45:19 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:45:19 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:19 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:45:19 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:19 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 0b2489e4-b52a-4e47-bc0b-c9f6818c826d does not exist
Nov 25 18:45:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:21 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:21 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:45:21 np0005535838 python3.9[191365]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 18:45:21 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:21 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:22 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:23 np0005535838 python3.9[192692]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 18:45:23 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:23 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:23 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:24 np0005535838 python3.9[193915]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 18:45:24 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:24 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:24 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:25 np0005535838 python3.9[195113]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 18:45:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:25 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:25 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:25 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:45:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:45:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:45:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:45:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:45:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:45:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:26 np0005535838 python3.9[196462]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:26 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:26 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:26 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:27 np0005535838 python3.9[197625]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:27 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:28 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:28 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:28 np0005535838 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 18:45:28 np0005535838 systemd[1]: Finished man-db-cache-update.service.
Nov 25 18:45:28 np0005535838 systemd[1]: man-db-cache-update.service: Consumed 12.706s CPU time.
Nov 25 18:45:28 np0005535838 systemd[1]: run-r09d2ef183fdd4d688ef929122973b249.service: Deactivated successfully.
Nov 25 18:45:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:29 np0005535838 python3.9[198216]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:29 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:29 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:29 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:30 np0005535838 python3.9[198406]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:31 np0005535838 python3.9[198561]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:31 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:31 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:31 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:32 np0005535838 python3.9[198752]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 18:45:32 np0005535838 systemd[1]: Reloading.
Nov 25 18:45:32 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:45:32 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:45:32 np0005535838 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 18:45:32 np0005535838 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 18:45:34 np0005535838 python3.9[198946]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:34 np0005535838 podman[199073]: 2025-11-25 23:45:34.884496085 +0000 UTC m=+0.112741437 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 18:45:35 np0005535838 python3.9[199120]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:37 np0005535838 python3.9[199282]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:38 np0005535838 python3.9[199437]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:39 np0005535838 python3.9[199592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:40 np0005535838 python3.9[199747]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:45:40.753 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:45:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:45:40.753 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:45:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:45:40.753 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:45:41 np0005535838 python3.9[199902]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:42 np0005535838 python3.9[200057]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:42 np0005535838 podman[200184]: 2025-11-25 23:45:42.790932444 +0000 UTC m=+0.112680576 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent)
Nov 25 18:45:43 np0005535838 python3.9[200232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:45 np0005535838 python3.9[200388]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:45 np0005535838 python3.9[200543]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:46 np0005535838 python3.9[200698]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:47 np0005535838 python3.9[200853]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:48 np0005535838 python3.9[201008]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 18:45:49 np0005535838 python3.9[201163]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:45:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:50 np0005535838 python3.9[201315]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:45:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:51 np0005535838 python3.9[201467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:45:51 np0005535838 python3.9[201619]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:45:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:52 np0005535838 python3.9[201771]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:45:53 np0005535838 python3.9[201923]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:45:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:54 np0005535838 python3.9[202075]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:45:55 np0005535838 python3.9[202200]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114353.6066053-554-34671781885842/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:45:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:45:56
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'images', 'vms', '.mgr', 'cephfs.cephfs.data']
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:45:56 np0005535838 python3.9[202352]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:45:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:56 np0005535838 python3.9[202479]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114355.462142-554-122363963963017/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:45:57 np0005535838 python3.9[202631]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:45:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v475: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:45:58 np0005535838 python3.9[202756]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114357.0792313-554-227852866324465/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:45:59 np0005535838 python3.9[202908]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:45:59 np0005535838 python3.9[203033]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114358.5717072-554-111722182401673/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:00 np0005535838 python3.9[203185]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:01 np0005535838 python3.9[203310]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114360.2035878-554-207829125034064/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:46:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:46:02 np0005535838 python3.9[203462]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:02 np0005535838 python3.9[203587]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114361.656885-554-59003449351394/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:03 np0005535838 python3.9[203739]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:04 np0005535838 python3.9[203862]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114363.1260843-554-31945535958151/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:05 np0005535838 python3.9[204014]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:05 np0005535838 podman[204039]: 2025-11-25 23:46:05.260557732 +0000 UTC m=+0.091330227 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:46:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:05 np0005535838 python3.9[204166]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114364.490349-554-103738279089853/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:06 np0005535838 python3.9[204318]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 25 18:46:07 np0005535838 python3.9[204471]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:07 np0005535838 python3.9[204625]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:08 np0005535838 python3.9[204777]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:09 np0005535838 python3.9[204929]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:09 np0005535838 python3.9[205081]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:10 np0005535838 python3.9[205233]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:11 np0005535838 python3.9[205385]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:11 np0005535838 python3.9[205537]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:12 np0005535838 python3.9[205689]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:13 np0005535838 podman[205785]: 2025-11-25 23:46:13.28578945 +0000 UTC m=+0.092948861 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 18:46:13 np0005535838 python3.9[205860]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:14 np0005535838 python3.9[206012]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:15 np0005535838 python3.9[206164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:15 np0005535838 python3.9[206316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:16 np0005535838 python3.9[206468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:17 np0005535838 python3.9[206620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:18 np0005535838 python3.9[206743]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114377.0625043-775-197558175278093/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:19 np0005535838 python3.9[206895]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:19 np0005535838 python3.9[207018]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114378.5051255-775-111059718653022/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:20 np0005535838 python3.9[207201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:46:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:46:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:46:20 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:46:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:46:21 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 199a0de9-ebb0-4283-b2aa-2422972e28c5 does not exist
Nov 25 18:46:21 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 44134351-52ec-428f-8d81-ad92a8913d2c does not exist
Nov 25 18:46:21 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 89d283b5-f379-455d-972a-9ecc796fbfcb does not exist
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:46:21 np0005535838 python3.9[207412]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114379.8471944-775-12685982182814/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:46:21 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:46:21 np0005535838 podman[207689]: 2025-11-25 23:46:21.930368547 +0000 UTC m=+0.038550106 container create a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:46:21 np0005535838 systemd[1]: Started libpod-conmon-a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71.scope.
Nov 25 18:46:21 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:46:21 np0005535838 podman[207689]: 2025-11-25 23:46:21.99615925 +0000 UTC m=+0.104340799 container init a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:46:22 np0005535838 podman[207689]: 2025-11-25 23:46:22.001922247 +0000 UTC m=+0.110103806 container start a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:46:22 np0005535838 podman[207689]: 2025-11-25 23:46:22.00539899 +0000 UTC m=+0.113580529 container attach a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:46:22 np0005535838 naughty_herschel[207728]: 167 167
Nov 25 18:46:22 np0005535838 podman[207689]: 2025-11-25 23:46:21.912557944 +0000 UTC m=+0.020739483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:46:22 np0005535838 systemd[1]: libpod-a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71.scope: Deactivated successfully.
Nov 25 18:46:22 np0005535838 podman[207689]: 2025-11-25 23:46:22.008660829 +0000 UTC m=+0.116842358 container died a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 18:46:22 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8db5ab3e7314fff301d30dcd83f66c5cebf54fbaebc6eba8c7df82b5613b7ea4-merged.mount: Deactivated successfully.
Nov 25 18:46:22 np0005535838 podman[207689]: 2025-11-25 23:46:22.041467308 +0000 UTC m=+0.149648847 container remove a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:46:22 np0005535838 systemd[1]: libpod-conmon-a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71.scope: Deactivated successfully.
Nov 25 18:46:22 np0005535838 python3.9[207724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:22 np0005535838 podman[207749]: 2025-11-25 23:46:22.263653662 +0000 UTC m=+0.064017027 container create 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 18:46:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:22 np0005535838 systemd[1]: Started libpod-conmon-8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c.scope.
Nov 25 18:46:22 np0005535838 podman[207749]: 2025-11-25 23:46:22.239412254 +0000 UTC m=+0.039775699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:46:22 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:46:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:22 np0005535838 podman[207749]: 2025-11-25 23:46:22.377513438 +0000 UTC m=+0.177876813 container init 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 18:46:22 np0005535838 podman[207749]: 2025-11-25 23:46:22.387878418 +0000 UTC m=+0.188241813 container start 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:46:22 np0005535838 podman[207749]: 2025-11-25 23:46:22.392290769 +0000 UTC m=+0.192654144 container attach 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:46:22 np0005535838 python3.9[207893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114381.5191815-775-111157250370951/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:23 np0005535838 fervent_sutherland[207813]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:46:23 np0005535838 fervent_sutherland[207813]: --> relative data size: 1.0
Nov 25 18:46:23 np0005535838 fervent_sutherland[207813]: --> All data devices are unavailable
Nov 25 18:46:23 np0005535838 systemd[1]: libpod-8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c.scope: Deactivated successfully.
Nov 25 18:46:23 np0005535838 systemd[1]: libpod-8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c.scope: Consumed 1.030s CPU time.
Nov 25 18:46:23 np0005535838 podman[207749]: 2025-11-25 23:46:23.483375686 +0000 UTC m=+1.283739071 container died 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 18:46:23 np0005535838 python3.9[208061]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:23 np0005535838 systemd[1]: var-lib-containers-storage-overlay-522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e-merged.mount: Deactivated successfully.
Nov 25 18:46:23 np0005535838 podman[207749]: 2025-11-25 23:46:23.554705449 +0000 UTC m=+1.355068824 container remove 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:46:23 np0005535838 systemd[1]: libpod-conmon-8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c.scope: Deactivated successfully.
Nov 25 18:46:24 np0005535838 python3.9[208307]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114382.9596784-775-272894014053985/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:24 np0005535838 podman[208366]: 2025-11-25 23:46:24.276079914 +0000 UTC m=+0.041177647 container create bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:46:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:24 np0005535838 systemd[1]: Started libpod-conmon-bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21.scope.
Nov 25 18:46:24 np0005535838 podman[208366]: 2025-11-25 23:46:24.257287515 +0000 UTC m=+0.022385258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:46:24 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:46:24 np0005535838 podman[208366]: 2025-11-25 23:46:24.379294732 +0000 UTC m=+0.144392505 container init bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:46:24 np0005535838 podman[208366]: 2025-11-25 23:46:24.391427001 +0000 UTC m=+0.156524764 container start bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:46:24 np0005535838 podman[208366]: 2025-11-25 23:46:24.3958027 +0000 UTC m=+0.160900433 container attach bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 18:46:24 np0005535838 systemd[1]: libpod-bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21.scope: Deactivated successfully.
Nov 25 18:46:24 np0005535838 upbeat_northcutt[208398]: 167 167
Nov 25 18:46:24 np0005535838 conmon[208398]: conmon bc7aac43cef3167398e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21.scope/container/memory.events
Nov 25 18:46:24 np0005535838 podman[208366]: 2025-11-25 23:46:24.40062228 +0000 UTC m=+0.165720043 container died bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 18:46:24 np0005535838 systemd[1]: var-lib-containers-storage-overlay-197be54d3d8e1704b220af00fa72515c7982c00982c6f3e19069d018d2a00284-merged.mount: Deactivated successfully.
Nov 25 18:46:24 np0005535838 podman[208366]: 2025-11-25 23:46:24.45038697 +0000 UTC m=+0.215484723 container remove bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 18:46:24 np0005535838 systemd[1]: libpod-conmon-bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21.scope: Deactivated successfully.
Nov 25 18:46:24 np0005535838 podman[208488]: 2025-11-25 23:46:24.668975215 +0000 UTC m=+0.052947626 container create 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:46:24 np0005535838 systemd[1]: Started libpod-conmon-15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea.scope.
Nov 25 18:46:24 np0005535838 podman[208488]: 2025-11-25 23:46:24.647160003 +0000 UTC m=+0.031132414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:46:24 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:46:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:24 np0005535838 podman[208488]: 2025-11-25 23:46:24.784234269 +0000 UTC m=+0.168206680 container init 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:46:24 np0005535838 podman[208488]: 2025-11-25 23:46:24.797851459 +0000 UTC m=+0.181823840 container start 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 18:46:24 np0005535838 podman[208488]: 2025-11-25 23:46:24.801210609 +0000 UTC m=+0.185182990 container attach 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 18:46:25 np0005535838 python3.9[208562]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:25 np0005535838 practical_johnson[208529]: {
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:    "0": [
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:        {
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "devices": [
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "/dev/loop3"
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            ],
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_name": "ceph_lv0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_size": "21470642176",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "name": "ceph_lv0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "tags": {
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.cluster_name": "ceph",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.crush_device_class": "",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.encrypted": "0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.osd_id": "0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.type": "block",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.vdo": "0"
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            },
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "type": "block",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "vg_name": "ceph_vg0"
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:        }
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:    ],
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:    "1": [
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:        {
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "devices": [
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "/dev/loop4"
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            ],
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_name": "ceph_lv1",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_size": "21470642176",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "name": "ceph_lv1",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "tags": {
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.cluster_name": "ceph",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.crush_device_class": "",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.encrypted": "0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.osd_id": "1",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.type": "block",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.vdo": "0"
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            },
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "type": "block",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "vg_name": "ceph_vg1"
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:        }
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:    ],
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:    "2": [
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:        {
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "devices": [
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "/dev/loop5"
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            ],
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_name": "ceph_lv2",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_size": "21470642176",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "name": "ceph_lv2",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "tags": {
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.cluster_name": "ceph",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.crush_device_class": "",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.encrypted": "0",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.osd_id": "2",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.type": "block",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:                "ceph.vdo": "0"
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            },
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "type": "block",
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:            "vg_name": "ceph_vg2"
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:        }
Nov 25 18:46:25 np0005535838 practical_johnson[208529]:    ]
Nov 25 18:46:25 np0005535838 practical_johnson[208529]: }
Nov 25 18:46:25 np0005535838 systemd[1]: libpod-15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea.scope: Deactivated successfully.
Nov 25 18:46:25 np0005535838 podman[208488]: 2025-11-25 23:46:25.535931026 +0000 UTC m=+0.919903427 container died 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:46:25 np0005535838 systemd[1]: var-lib-containers-storage-overlay-e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078-merged.mount: Deactivated successfully.
Nov 25 18:46:25 np0005535838 podman[208488]: 2025-11-25 23:46:25.610549029 +0000 UTC m=+0.994521410 container remove 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:46:25 np0005535838 systemd[1]: libpod-conmon-15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea.scope: Deactivated successfully.
Nov 25 18:46:25 np0005535838 python3.9[208689]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114384.3701646-775-56314254395744/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:46:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:46:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:46:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:46:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:46:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:46:26 np0005535838 podman[208992]: 2025-11-25 23:46:26.311746607 +0000 UTC m=+0.040459768 container create 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:46:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:26 np0005535838 systemd[1]: Started libpod-conmon-9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6.scope.
Nov 25 18:46:26 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:46:26 np0005535838 podman[208992]: 2025-11-25 23:46:26.293357209 +0000 UTC m=+0.022070450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:46:26 np0005535838 python3.9[208985]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:26 np0005535838 podman[208992]: 2025-11-25 23:46:26.407345778 +0000 UTC m=+0.136059029 container init 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 18:46:26 np0005535838 podman[208992]: 2025-11-25 23:46:26.414723388 +0000 UTC m=+0.143436549 container start 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:46:26 np0005535838 podman[208992]: 2025-11-25 23:46:26.418229004 +0000 UTC m=+0.146942205 container attach 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:46:26 np0005535838 recursing_bose[209008]: 167 167
Nov 25 18:46:26 np0005535838 systemd[1]: libpod-9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6.scope: Deactivated successfully.
Nov 25 18:46:26 np0005535838 podman[208992]: 2025-11-25 23:46:26.420896206 +0000 UTC m=+0.149609397 container died 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:46:26 np0005535838 systemd[1]: var-lib-containers-storage-overlay-28d9a9502b1f6aba5815a7f1663c6f1f8a9943e26db7fdcfc7ac9d87c097bcc8-merged.mount: Deactivated successfully.
Nov 25 18:46:26 np0005535838 podman[208992]: 2025-11-25 23:46:26.464770595 +0000 UTC m=+0.193483796 container remove 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 18:46:26 np0005535838 systemd[1]: libpod-conmon-9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6.scope: Deactivated successfully.
Nov 25 18:46:26 np0005535838 podman[209084]: 2025-11-25 23:46:26.71650997 +0000 UTC m=+0.076604128 container create 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:46:26 np0005535838 systemd[1]: Started libpod-conmon-668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f.scope.
Nov 25 18:46:26 np0005535838 podman[209084]: 2025-11-25 23:46:26.682260501 +0000 UTC m=+0.042354739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:46:26 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:46:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:26 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:46:26 np0005535838 podman[209084]: 2025-11-25 23:46:26.829511383 +0000 UTC m=+0.189605621 container init 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:46:26 np0005535838 podman[209084]: 2025-11-25 23:46:26.837357425 +0000 UTC m=+0.197451623 container start 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:46:26 np0005535838 podman[209084]: 2025-11-25 23:46:26.84120763 +0000 UTC m=+0.201301788 container attach 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 18:46:27 np0005535838 python3.9[209175]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114385.920137-775-238866778892393/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:27 np0005535838 python3.9[209341]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]: {
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "osd_id": 2,
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "type": "bluestore"
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:    },
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "osd_id": 1,
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "type": "bluestore"
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:    },
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "osd_id": 0,
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:        "type": "bluestore"
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]:    }
Nov 25 18:46:27 np0005535838 keen_rhodes[209142]: }
Nov 25 18:46:27 np0005535838 systemd[1]: libpod-668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f.scope: Deactivated successfully.
Nov 25 18:46:27 np0005535838 podman[209084]: 2025-11-25 23:46:27.840903699 +0000 UTC m=+1.200997857 container died 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:46:27 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32-merged.mount: Deactivated successfully.
Nov 25 18:46:27 np0005535838 podman[209084]: 2025-11-25 23:46:27.912336626 +0000 UTC m=+1.272430794 container remove 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:46:27 np0005535838 systemd[1]: libpod-conmon-668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f.scope: Deactivated successfully.
Nov 25 18:46:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:46:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:46:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:46:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:46:27 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 0110317f-e754-48b0-82ed-769cc7332997 does not exist
Nov 25 18:46:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:28 np0005535838 python3.9[209541]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114387.2453518-775-261889011415995/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:46:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:46:29 np0005535838 python3.9[209693]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:29 np0005535838 python3.9[209816]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114388.6411607-775-116090343045594/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:30 np0005535838 python3.9[209968]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:31 np0005535838 python3.9[210091]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114390.0266225-775-36547474698443/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:31 np0005535838 python3.9[210243]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:32 np0005535838 python3.9[210366]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114391.384346-775-36274582867095/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:33 np0005535838 python3.9[210518]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:34 np0005535838 python3.9[210641]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114392.7747965-775-32331882837140/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:34 np0005535838 python3.9[210793]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:35 np0005535838 podman[210916]: 2025-11-25 23:46:35.50888919 +0000 UTC m=+0.163209292 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 18:46:35 np0005535838 python3.9[210917]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114394.2072077-775-208307381025807/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:36 np0005535838 python3.9[211094]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:46:37 np0005535838 python3.9[211217]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114395.7450414-775-181594614764895/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:37 np0005535838 python3.9[211367]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:46:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:38 np0005535838 python3.9[211522]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 25 18:46:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:40 np0005535838 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 18:46:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:46:40.753 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:46:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:46:40.755 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:46:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:46:40.755 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:46:40 np0005535838 python3.9[211678]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:41 np0005535838 python3.9[211830]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:42 np0005535838 python3.9[211982]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:43 np0005535838 python3.9[212134]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:43 np0005535838 podman[212258]: 2025-11-25 23:46:43.785914331 +0000 UTC m=+0.073787528 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 18:46:43 np0005535838 python3.9[212306]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:44 np0005535838 python3.9[212459]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:45 np0005535838 python3.9[212611]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:46 np0005535838 python3.9[212764]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:47 np0005535838 python3.9[212916]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:47 np0005535838 python3.9[213068]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v500: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:48 np0005535838 python3.9[213220]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:46:48 np0005535838 systemd[1]: Reloading.
Nov 25 18:46:48 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:46:48 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:46:49 np0005535838 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 18:46:49 np0005535838 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 18:46:49 np0005535838 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 18:46:49 np0005535838 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 18:46:49 np0005535838 systemd[1]: Starting libvirt logging daemon...
Nov 25 18:46:49 np0005535838 systemd[1]: Started libvirt logging daemon.
Nov 25 18:46:50 np0005535838 python3.9[213414]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:46:50 np0005535838 systemd[1]: Reloading.
Nov 25 18:46:50 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:46:50 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:46:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:50 np0005535838 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 18:46:50 np0005535838 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 18:46:50 np0005535838 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 18:46:50 np0005535838 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 18:46:50 np0005535838 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 18:46:50 np0005535838 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 18:46:50 np0005535838 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 18:46:50 np0005535838 systemd[1]: Started libvirt nodedev daemon.
Nov 25 18:46:51 np0005535838 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 18:46:51 np0005535838 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 18:46:51 np0005535838 python3.9[213631]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:46:51 np0005535838 systemd[1]: Reloading.
Nov 25 18:46:51 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:46:51 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:46:51 np0005535838 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 18:46:51 np0005535838 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 18:46:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:52 np0005535838 setroubleshoot[213578]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a0b44441-1549-43ae-9d32-761e369081f6
Nov 25 18:46:52 np0005535838 setroubleshoot[213578]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 25 18:46:52 np0005535838 setroubleshoot[213578]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a0b44441-1549-43ae-9d32-761e369081f6
Nov 25 18:46:52 np0005535838 setroubleshoot[213578]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 25 18:46:52 np0005535838 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 18:46:52 np0005535838 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 18:46:52 np0005535838 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 18:46:52 np0005535838 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 18:46:52 np0005535838 systemd[1]: Starting libvirt proxy daemon...
Nov 25 18:46:52 np0005535838 systemd[1]: Started libvirt proxy daemon.
Nov 25 18:46:53 np0005535838 python3.9[213852]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:46:53 np0005535838 systemd[1]: Reloading.
Nov 25 18:46:54 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:46:54 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:46:54 np0005535838 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 18:46:54 np0005535838 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 18:46:54 np0005535838 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 18:46:54 np0005535838 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 18:46:54 np0005535838 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 18:46:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:54 np0005535838 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 18:46:54 np0005535838 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 18:46:54 np0005535838 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 18:46:54 np0005535838 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 18:46:54 np0005535838 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 18:46:54 np0005535838 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 18:46:54 np0005535838 systemd[1]: Started libvirt QEMU daemon.
Nov 25 18:46:55 np0005535838 python3.9[214067]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:46:55 np0005535838 systemd[1]: Reloading.
Nov 25 18:46:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:46:55 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:46:55 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:46:55 np0005535838 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 18:46:55 np0005535838 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 18:46:55 np0005535838 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 18:46:55 np0005535838 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 18:46:55 np0005535838 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 18:46:55 np0005535838 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 18:46:55 np0005535838 systemd[1]: Starting libvirt secret daemon...
Nov 25 18:46:55 np0005535838 systemd[1]: Started libvirt secret daemon.
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:46:56
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', '.mgr']
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:46:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:56 np0005535838 python3.9[214279]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:46:57 np0005535838 python3.9[214431]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 18:46:58 np0005535838 python3.9[214583]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:46:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:46:59 np0005535838 python3.9[214737]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 18:47:00 np0005535838 python3.9[214887]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:00 np0005535838 python3.9[215008]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114419.5057478-1133-172326382186032/.source.xml follow=False _original_basename=secret.xml.j2 checksum=0dfd54db3937ba95246ebf996592c369f4394dfb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:01 np0005535838 python3.9[215160]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 101922db-575f-58e2-980f-928050464f69#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:47:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:47:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:02 np0005535838 python3.9[215322]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:02 np0005535838 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 18:47:02 np0005535838 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 18:47:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:05 np0005535838 python3.9[215787]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:05 np0005535838 podman[215911]: 2025-11-25 23:47:05.922243755 +0000 UTC m=+0.166415827 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 18:47:06 np0005535838 python3.9[215959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:06 np0005535838 python3.9[216088]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114425.3150318-1188-213143539195875/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:07 np0005535838 python3.9[216240]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:08 np0005535838 python3.9[216392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:08 np0005535838 python3.9[216470]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:09 np0005535838 python3.9[216622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:09 np0005535838 python3.9[216700]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zs05y00_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:10 np0005535838 python3.9[216852]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:11 np0005535838 python3.9[216930]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:12 np0005535838 python3.9[217082]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:47:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:12 np0005535838 python3[217235]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 18:47:13 np0005535838 python3.9[217387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:13 np0005535838 podman[217437]: 2025-11-25 23:47:13.990850917 +0000 UTC m=+0.054808525 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:47:14 np0005535838 python3.9[217483]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:15 np0005535838 python3.9[217635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:15 np0005535838 python3.9[217713]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:16 np0005535838 python3.9[217865]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:16 np0005535838 python3.9[217943]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:17 np0005535838 python3.9[218095]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:18 np0005535838 python3.9[218173]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:18 np0005535838 python3.9[218325]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:19 np0005535838 python3.9[218450]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114438.2151957-1313-161999911809549/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:20 np0005535838 python3.9[218602]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:21 np0005535838 python3.9[218754]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.660305) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441660335, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2036, "num_deletes": 251, "total_data_size": 2341115, "memory_usage": 2389456, "flush_reason": "Manual Compaction"}
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441675540, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2279957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9069, "largest_seqno": 11104, "table_properties": {"data_size": 2270726, "index_size": 5853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17733, "raw_average_key_size": 19, "raw_value_size": 2252369, "raw_average_value_size": 2469, "num_data_blocks": 269, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114204, "oldest_key_time": 1764114204, "file_creation_time": 1764114441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 15332 microseconds, and 9485 cpu microseconds.
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.675600) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2279957 bytes OK
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.675657) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.677679) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.677702) EVENT_LOG_v1 {"time_micros": 1764114441677695, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.677722) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2332636, prev total WAL file size 2332636, number of live WAL files 2.
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.678879) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2226KB)], [26(4609KB)]
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441678951, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 6999812, "oldest_snapshot_seqno": -1}
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3208 keys, 5898567 bytes, temperature: kUnknown
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441727752, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 5898567, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5872637, "index_size": 16796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 73947, "raw_average_key_size": 23, "raw_value_size": 5810768, "raw_average_value_size": 1811, "num_data_blocks": 741, "num_entries": 3208, "num_filter_entries": 3208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.728085) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 5898567 bytes
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.729716) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.1 rd, 120.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 4.5 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 3722, records dropped: 514 output_compression: NoCompression
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.729747) EVENT_LOG_v1 {"time_micros": 1764114441729732, "job": 10, "event": "compaction_finished", "compaction_time_micros": 48928, "compaction_time_cpu_micros": 30451, "output_level": 6, "num_output_files": 1, "total_output_size": 5898567, "num_input_records": 3722, "num_output_records": 3208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441730686, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441732507, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.678767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:47:21 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:47:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:22 np0005535838 python3.9[218909]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:23 np0005535838 python3.9[219061]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:47:24 np0005535838 python3.9[219214]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:47:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:25 np0005535838 python3.9[219368]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:47:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:47:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:47:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:47:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:47:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:47:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:47:26 np0005535838 python3.9[219523]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:26 np0005535838 python3.9[219675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:27 np0005535838 python3.9[219798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114446.3719506-1385-90066066387964/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:28 np0005535838 python3.9[219974]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:47:28 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 6dfed50b-51db-4fb3-ba6d-90fdd6623b43 does not exist
Nov 25 18:47:28 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 7d435c94-08f5-434e-9c93-96f15fab3652 does not exist
Nov 25 18:47:28 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev b2ccf56b-882e-4af5-b230-1a10eb0d140f does not exist
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:47:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:47:29 np0005535838 python3.9[220204]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114447.8088746-1400-19216227534575/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:29 np0005535838 podman[220438]: 2025-11-25 23:47:29.555067217 +0000 UTC m=+0.041627136 container create 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:47:29 np0005535838 systemd[1]: Started libpod-conmon-0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b.scope.
Nov 25 18:47:29 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:47:29 np0005535838 podman[220438]: 2025-11-25 23:47:29.534546762 +0000 UTC m=+0.021106721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:47:29 np0005535838 podman[220438]: 2025-11-25 23:47:29.630738255 +0000 UTC m=+0.117298184 container init 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:47:29 np0005535838 podman[220438]: 2025-11-25 23:47:29.636080407 +0000 UTC m=+0.122640316 container start 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:47:29 np0005535838 podman[220438]: 2025-11-25 23:47:29.639452586 +0000 UTC m=+0.126012525 container attach 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:47:29 np0005535838 agitated_mendeleev[220480]: 167 167
Nov 25 18:47:29 np0005535838 systemd[1]: libpod-0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b.scope: Deactivated successfully.
Nov 25 18:47:29 np0005535838 podman[220438]: 2025-11-25 23:47:29.643131184 +0000 UTC m=+0.129691103 container died 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:47:29 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1ec4b9fd06cd33e9efdd00c36d4c19afbdcacc44a62155508549cb935a8f9bef-merged.mount: Deactivated successfully.
Nov 25 18:47:29 np0005535838 podman[220438]: 2025-11-25 23:47:29.688259071 +0000 UTC m=+0.174818980 container remove 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:47:29 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:47:29 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:47:29 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:47:29 np0005535838 systemd[1]: libpod-conmon-0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b.scope: Deactivated successfully.
Nov 25 18:47:29 np0005535838 podman[220539]: 2025-11-25 23:47:29.857354408 +0000 UTC m=+0.049731971 container create ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:47:29 np0005535838 systemd[1]: Started libpod-conmon-ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365.scope.
Nov 25 18:47:29 np0005535838 python3.9[220533]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:47:29 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:47:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:29 np0005535838 podman[220539]: 2025-11-25 23:47:29.834556823 +0000 UTC m=+0.026934416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:47:29 np0005535838 podman[220539]: 2025-11-25 23:47:29.945291861 +0000 UTC m=+0.137669434 container init ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:47:29 np0005535838 podman[220539]: 2025-11-25 23:47:29.956526679 +0000 UTC m=+0.148904242 container start ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:47:29 np0005535838 podman[220539]: 2025-11-25 23:47:29.959921869 +0000 UTC m=+0.152299432 container attach ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:47:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:30 np0005535838 python3.9[220683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114449.388985-1415-247234218831530/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:47:30 np0005535838 quizzical_wing[220556]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:47:30 np0005535838 quizzical_wing[220556]: --> relative data size: 1.0
Nov 25 18:47:30 np0005535838 quizzical_wing[220556]: --> All data devices are unavailable
Nov 25 18:47:30 np0005535838 systemd[1]: libpod-ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365.scope: Deactivated successfully.
Nov 25 18:47:30 np0005535838 podman[220539]: 2025-11-25 23:47:30.99001331 +0000 UTC m=+1.182390903 container died ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:47:31 np0005535838 systemd[1]: var-lib-containers-storage-overlay-72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b-merged.mount: Deactivated successfully.
Nov 25 18:47:31 np0005535838 podman[220539]: 2025-11-25 23:47:31.108394432 +0000 UTC m=+1.300772035 container remove ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:47:31 np0005535838 systemd[1]: libpod-conmon-ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365.scope: Deactivated successfully.
Nov 25 18:47:31 np0005535838 python3.9[220885]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:47:31 np0005535838 systemd[1]: Reloading.
Nov 25 18:47:31 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:47:31 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:47:31 np0005535838 podman[221048]: 2025-11-25 23:47:31.831533968 +0000 UTC m=+0.053628674 container create 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:47:31 np0005535838 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 18:47:31 np0005535838 systemd[1]: Started libpod-conmon-6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441.scope.
Nov 25 18:47:31 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:47:31 np0005535838 podman[221048]: 2025-11-25 23:47:31.910367589 +0000 UTC m=+0.132462305 container init 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:47:31 np0005535838 podman[221048]: 2025-11-25 23:47:31.81766854 +0000 UTC m=+0.039763246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:47:31 np0005535838 podman[221048]: 2025-11-25 23:47:31.916935424 +0000 UTC m=+0.139030110 container start 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:47:31 np0005535838 podman[221048]: 2025-11-25 23:47:31.919901372 +0000 UTC m=+0.141996088 container attach 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:47:31 np0005535838 exciting_archimedes[221067]: 167 167
Nov 25 18:47:31 np0005535838 systemd[1]: libpod-6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441.scope: Deactivated successfully.
Nov 25 18:47:31 np0005535838 podman[221048]: 2025-11-25 23:47:31.923078436 +0000 UTC m=+0.145173122 container died 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:47:31 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f22404195051bb93b8effc5b15dd6f4612eb9818105898794fab39ad807a8a4e-merged.mount: Deactivated successfully.
Nov 25 18:47:31 np0005535838 podman[221048]: 2025-11-25 23:47:31.955151277 +0000 UTC m=+0.177245963 container remove 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 18:47:31 np0005535838 systemd[1]: libpod-conmon-6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441.scope: Deactivated successfully.
Nov 25 18:47:32 np0005535838 podman[221139]: 2025-11-25 23:47:32.102679392 +0000 UTC m=+0.041018800 container create 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:47:32 np0005535838 systemd[1]: Started libpod-conmon-0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8.scope.
Nov 25 18:47:32 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:47:32 np0005535838 podman[221139]: 2025-11-25 23:47:32.084809708 +0000 UTC m=+0.023149086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:47:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:32 np0005535838 podman[221139]: 2025-11-25 23:47:32.23975953 +0000 UTC m=+0.178098918 container init 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:47:32 np0005535838 podman[221139]: 2025-11-25 23:47:32.248326077 +0000 UTC m=+0.186665435 container start 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:47:32 np0005535838 podman[221139]: 2025-11-25 23:47:32.26087616 +0000 UTC m=+0.199215568 container attach 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:47:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:32 np0005535838 python3.9[221264]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 18:47:32 np0005535838 systemd[1]: Reloading.
Nov 25 18:47:32 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:47:32 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]: {
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:    "0": [
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:        {
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "devices": [
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "/dev/loop3"
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            ],
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_name": "ceph_lv0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_size": "21470642176",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "name": "ceph_lv0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "tags": {
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.cluster_name": "ceph",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.crush_device_class": "",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.encrypted": "0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.osd_id": "0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.type": "block",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.vdo": "0"
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            },
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "type": "block",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "vg_name": "ceph_vg0"
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:        }
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:    ],
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:    "1": [
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:        {
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "devices": [
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "/dev/loop4"
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            ],
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_name": "ceph_lv1",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_size": "21470642176",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "name": "ceph_lv1",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "tags": {
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.cluster_name": "ceph",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.crush_device_class": "",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.encrypted": "0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.osd_id": "1",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.type": "block",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.vdo": "0"
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            },
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "type": "block",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "vg_name": "ceph_vg1"
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:        }
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:    ],
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:    "2": [
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:        {
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "devices": [
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "/dev/loop5"
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            ],
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_name": "ceph_lv2",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_size": "21470642176",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "name": "ceph_lv2",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "tags": {
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.cluster_name": "ceph",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.crush_device_class": "",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.encrypted": "0",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.osd_id": "2",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.type": "block",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:                "ceph.vdo": "0"
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            },
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "type": "block",
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:            "vg_name": "ceph_vg2"
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:        }
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]:    ]
Nov 25 18:47:32 np0005535838 frosty_neumann[221184]: }
Nov 25 18:47:32 np0005535838 podman[221139]: 2025-11-25 23:47:32.958403117 +0000 UTC m=+0.896742475 container died 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:47:33 np0005535838 systemd[1]: libpod-0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8.scope: Deactivated successfully.
Nov 25 18:47:33 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4-merged.mount: Deactivated successfully.
Nov 25 18:47:33 np0005535838 podman[221139]: 2025-11-25 23:47:33.205849422 +0000 UTC m=+1.144188820 container remove 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:47:33 np0005535838 systemd[1]: libpod-conmon-0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8.scope: Deactivated successfully.
Nov 25 18:47:34 np0005535838 podman[221459]: 2025-11-25 23:47:34.019692356 +0000 UTC m=+0.061982616 container create 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 18:47:34 np0005535838 systemd[1]: Started libpod-conmon-06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927.scope.
Nov 25 18:47:34 np0005535838 podman[221459]: 2025-11-25 23:47:33.99462339 +0000 UTC m=+0.036913700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:47:34 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:47:34 np0005535838 podman[221459]: 2025-11-25 23:47:34.124698922 +0000 UTC m=+0.166989162 container init 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:47:34 np0005535838 podman[221459]: 2025-11-25 23:47:34.134374358 +0000 UTC m=+0.176664618 container start 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:47:34 np0005535838 podman[221459]: 2025-11-25 23:47:34.138634281 +0000 UTC m=+0.180924521 container attach 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:47:34 np0005535838 festive_stonebraker[221477]: 167 167
Nov 25 18:47:34 np0005535838 systemd[1]: libpod-06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927.scope: Deactivated successfully.
Nov 25 18:47:34 np0005535838 conmon[221477]: conmon 06ebaab5236894bccf83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927.scope/container/memory.events
Nov 25 18:47:34 np0005535838 podman[221459]: 2025-11-25 23:47:34.144404564 +0000 UTC m=+0.186694794 container died 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:47:34 np0005535838 systemd[1]: var-lib-containers-storage-overlay-60e59a7c48d658790d0620432e8d60b073618f187c98b5a72d42cd6153b892e0-merged.mount: Deactivated successfully.
Nov 25 18:47:34 np0005535838 podman[221459]: 2025-11-25 23:47:34.191995477 +0000 UTC m=+0.234285737 container remove 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 18:47:34 np0005535838 systemd[1]: libpod-conmon-06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927.scope: Deactivated successfully.
Nov 25 18:47:34 np0005535838 systemd[1]: Reloading.
Nov 25 18:47:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:34 np0005535838 podman[221502]: 2025-11-25 23:47:34.37451457 +0000 UTC m=+0.057983090 container create de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:47:34 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:47:34 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:47:34 np0005535838 podman[221502]: 2025-11-25 23:47:34.346830225 +0000 UTC m=+0.030298805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:47:34 np0005535838 systemd[1]: Started libpod-conmon-de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542.scope.
Nov 25 18:47:34 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:47:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:47:34 np0005535838 podman[221502]: 2025-11-25 23:47:34.728676086 +0000 UTC m=+0.412144696 container init de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:47:34 np0005535838 podman[221502]: 2025-11-25 23:47:34.744348302 +0000 UTC m=+0.427816862 container start de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:47:34 np0005535838 podman[221502]: 2025-11-25 23:47:34.747824965 +0000 UTC m=+0.431293575 container attach de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:47:35 np0005535838 systemd[1]: session-49.scope: Deactivated successfully.
Nov 25 18:47:35 np0005535838 systemd[1]: session-49.scope: Consumed 3min 52.939s CPU time.
Nov 25 18:47:35 np0005535838 systemd-logind[789]: Session 49 logged out. Waiting for processes to exit.
Nov 25 18:47:35 np0005535838 systemd-logind[789]: Removed session 49.
Nov 25 18:47:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]: {
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "osd_id": 2,
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "type": "bluestore"
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:    },
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "osd_id": 1,
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "type": "bluestore"
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:    },
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "osd_id": 0,
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:        "type": "bluestore"
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]:    }
Nov 25 18:47:35 np0005535838 optimistic_taussig[221551]: }
Nov 25 18:47:35 np0005535838 systemd[1]: libpod-de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542.scope: Deactivated successfully.
Nov 25 18:47:35 np0005535838 systemd[1]: libpod-de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542.scope: Consumed 1.083s CPU time.
Nov 25 18:47:35 np0005535838 podman[221502]: 2025-11-25 23:47:35.819086368 +0000 UTC m=+1.502554928 container died de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:47:35 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1-merged.mount: Deactivated successfully.
Nov 25 18:47:35 np0005535838 podman[221502]: 2025-11-25 23:47:35.887244367 +0000 UTC m=+1.570712907 container remove de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:47:35 np0005535838 systemd[1]: libpod-conmon-de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542.scope: Deactivated successfully.
Nov 25 18:47:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:47:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:47:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:47:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:47:35 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 86f9501c-2e34-4d39-b870-c03c132443e2 does not exist
Nov 25 18:47:36 np0005535838 podman[221647]: 2025-11-25 23:47:36.238985879 +0000 UTC m=+0.134926831 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:47:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:47:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:47:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:40 np0005535838 systemd-logind[789]: New session 50 of user zuul.
Nov 25 18:47:40 np0005535838 systemd[1]: Started Session 50 of User zuul.
Nov 25 18:47:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:47:40.754 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:47:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:47:40.756 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:47:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:47:40.756 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:47:41 np0005535838 python3.9[221850]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:47:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:43 np0005535838 python3.9[222004]: ansible-ansible.builtin.service_facts Invoked
Nov 25 18:47:43 np0005535838 network[222021]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 18:47:43 np0005535838 network[222022]: 'network-scripts' will be removed from distribution in near future.
Nov 25 18:47:43 np0005535838 network[222023]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 18:47:44 np0005535838 podman[222030]: 2025-11-25 23:47:44.281658619 +0000 UTC m=+0.095657825 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 18:47:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:49 np0005535838 python3.9[222314]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 18:47:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:50 np0005535838 python3.9[222398]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:47:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:47:56
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'images', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta']
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:47:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:56 np0005535838 python3.9[222555]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:47:57 np0005535838 python3.9[222707]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:47:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:47:58 np0005535838 python3.9[222860]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:47:59 np0005535838 python3.9[223012]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:48:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:00 np0005535838 python3.9[223165]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:01 np0005535838 python3.9[223288]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114479.8658173-95-15858991582194/.source.iscsi _original_basename=.jbmbs7j9 follow=False checksum=1bf7c32893e80615accff856e371c26699a7fd74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:48:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:48:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:02 np0005535838 python3.9[223440]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:03 np0005535838 python3.9[223592]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:04 np0005535838 python3.9[223744]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:48:04 np0005535838 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 18:48:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:05 np0005535838 python3.9[223900]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:48:05 np0005535838 systemd[1]: Reloading.
Nov 25 18:48:05 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:48:05 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:48:06 np0005535838 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 18:48:06 np0005535838 systemd[1]: Starting Open-iSCSI...
Nov 25 18:48:06 np0005535838 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 18:48:06 np0005535838 systemd[1]: Started Open-iSCSI.
Nov 25 18:48:06 np0005535838 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 25 18:48:06 np0005535838 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 25 18:48:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:07 np0005535838 podman[224073]: 2025-11-25 23:48:07.008787756 +0000 UTC m=+0.144522900 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 18:48:07 np0005535838 python3.9[224121]: ansible-ansible.builtin.service_facts Invoked
Nov 25 18:48:07 np0005535838 network[224144]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 18:48:07 np0005535838 network[224145]: 'network-scripts' will be removed from distribution in near future.
Nov 25 18:48:07 np0005535838 network[224146]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 18:48:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:12 np0005535838 python3.9[224418]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 18:48:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:13 np0005535838 python3.9[224570]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 25 18:48:14 np0005535838 python3.9[224726]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:14 np0005535838 podman[224821]: 2025-11-25 23:48:14.64092085 +0000 UTC m=+0.062705633 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 18:48:14 np0005535838 python3.9[224869]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114493.491119-172-10177492081552/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:15 np0005535838 python3.9[225021]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:17 np0005535838 python3.9[225173]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:48:18 np0005535838 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 18:48:18 np0005535838 systemd[1]: Stopped Load Kernel Modules.
Nov 25 18:48:18 np0005535838 systemd[1]: Stopping Load Kernel Modules...
Nov 25 18:48:18 np0005535838 systemd[1]: Starting Load Kernel Modules...
Nov 25 18:48:18 np0005535838 systemd[1]: Finished Load Kernel Modules.
Nov 25 18:48:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:18 np0005535838 python3.9[225329]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:48:19 np0005535838 python3.9[225481]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:48:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:20 np0005535838 python3.9[225633]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:48:21 np0005535838 python3.9[225785]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:22 np0005535838 python3.9[225908]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114500.9145272-230-83199037616165/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:22 np0005535838 python3.9[226060]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:48:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:24 np0005535838 python3.9[226213]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:25 np0005535838 python3.9[226365]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:48:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:48:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:48:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:48:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:48:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:48:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:26 np0005535838 python3.9[226517]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:27 np0005535838 python3.9[226669]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:28 np0005535838 python3.9[226821]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:29 np0005535838 python3.9[226973]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:29 np0005535838 python3.9[227125]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:30 np0005535838 python3.9[227277]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:48:31 np0005535838 python3.9[227431]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:32 np0005535838 python3.9[227583]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:48:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:33 np0005535838 python3.9[227735]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:33 np0005535838 python3.9[227813]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:48:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:34 np0005535838 python3.9[227965]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:34 np0005535838 python3.9[228043]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:48:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:35 np0005535838 python3.9[228195]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:36 np0005535838 python3.9[228372]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:36 np0005535838 python3.9[228539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:48:37 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 2f023514-b421-437b-a9ac-519d1fcd0e26 does not exist
Nov 25 18:48:37 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 4fe13760-bfed-47ee-9742-0afdd5fb7c67 does not exist
Nov 25 18:48:37 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 76465874-22b1-4e6a-b601-0acc79d16311 does not exist
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:48:37 np0005535838 podman[228605]: 2025-11-25 23:48:37.232787051 +0000 UTC m=+0.076721653 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:48:37 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:48:37 np0005535838 python3.9[228837]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:37 np0005535838 podman[228877]: 2025-11-25 23:48:37.730010695 +0000 UTC m=+0.049796220 container create 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 18:48:37 np0005535838 systemd[1]: Started libpod-conmon-75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9.scope.
Nov 25 18:48:37 np0005535838 podman[228877]: 2025-11-25 23:48:37.703326378 +0000 UTC m=+0.023111953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:48:37 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:48:37 np0005535838 podman[228877]: 2025-11-25 23:48:37.835695916 +0000 UTC m=+0.155481421 container init 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 18:48:37 np0005535838 podman[228877]: 2025-11-25 23:48:37.84263808 +0000 UTC m=+0.162423585 container start 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:48:37 np0005535838 podman[228877]: 2025-11-25 23:48:37.84679189 +0000 UTC m=+0.166577375 container attach 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 18:48:37 np0005535838 dazzling_satoshi[228917]: 167 167
Nov 25 18:48:37 np0005535838 systemd[1]: libpod-75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9.scope: Deactivated successfully.
Nov 25 18:48:37 np0005535838 podman[228877]: 2025-11-25 23:48:37.849220975 +0000 UTC m=+0.169006480 container died 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 18:48:37 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7395c57757e58952d6e8a41f01282c69d68f9ab7d070e7050c27b7d8fa82befa-merged.mount: Deactivated successfully.
Nov 25 18:48:37 np0005535838 podman[228877]: 2025-11-25 23:48:37.888153245 +0000 UTC m=+0.207938730 container remove 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:48:37 np0005535838 systemd[1]: libpod-conmon-75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9.scope: Deactivated successfully.
Nov 25 18:48:38 np0005535838 podman[228993]: 2025-11-25 23:48:38.036414344 +0000 UTC m=+0.037228137 container create d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:48:38 np0005535838 systemd[1]: Started libpod-conmon-d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20.scope.
Nov 25 18:48:38 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:48:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:38 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:38 np0005535838 python3.9[228987]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:38 np0005535838 podman[228993]: 2025-11-25 23:48:38.019593898 +0000 UTC m=+0.020407731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:48:38 np0005535838 podman[228993]: 2025-11-25 23:48:38.128995986 +0000 UTC m=+0.129809809 container init d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:48:38 np0005535838 podman[228993]: 2025-11-25 23:48:38.13666903 +0000 UTC m=+0.137482813 container start d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 18:48:38 np0005535838 podman[228993]: 2025-11-25 23:48:38.139481655 +0000 UTC m=+0.140295448 container attach d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:48:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:38 np0005535838 python3.9[229166]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:48:39 np0005535838 systemd[1]: Reloading.
Nov 25 18:48:39 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:48:39 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:48:39 np0005535838 flamboyant_chaum[229010]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:48:39 np0005535838 flamboyant_chaum[229010]: --> relative data size: 1.0
Nov 25 18:48:39 np0005535838 flamboyant_chaum[229010]: --> All data devices are unavailable
Nov 25 18:48:39 np0005535838 podman[228993]: 2025-11-25 23:48:39.198255707 +0000 UTC m=+1.199069490 container died d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:48:39 np0005535838 systemd[1]: libpod-d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20.scope: Deactivated successfully.
Nov 25 18:48:39 np0005535838 systemd[1]: libpod-d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20.scope: Consumed 1.000s CPU time.
Nov 25 18:48:39 np0005535838 systemd[1]: var-lib-containers-storage-overlay-fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e-merged.mount: Deactivated successfully.
Nov 25 18:48:39 np0005535838 podman[228993]: 2025-11-25 23:48:39.416033927 +0000 UTC m=+1.416847710 container remove d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:48:39 np0005535838 systemd[1]: libpod-conmon-d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20.scope: Deactivated successfully.
Nov 25 18:48:40 np0005535838 podman[229534]: 2025-11-25 23:48:40.206273894 +0000 UTC m=+0.048152656 container create 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:48:40 np0005535838 systemd[1]: Started libpod-conmon-987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959.scope.
Nov 25 18:48:40 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:48:40 np0005535838 python3.9[229520]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:40 np0005535838 podman[229534]: 2025-11-25 23:48:40.187235919 +0000 UTC m=+0.029114711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:48:40 np0005535838 podman[229534]: 2025-11-25 23:48:40.296941697 +0000 UTC m=+0.138820499 container init 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:48:40 np0005535838 podman[229534]: 2025-11-25 23:48:40.3046302 +0000 UTC m=+0.146508952 container start 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:48:40 np0005535838 podman[229534]: 2025-11-25 23:48:40.307514036 +0000 UTC m=+0.149392898 container attach 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:48:40 np0005535838 zen_elion[229550]: 167 167
Nov 25 18:48:40 np0005535838 systemd[1]: libpod-987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959.scope: Deactivated successfully.
Nov 25 18:48:40 np0005535838 podman[229534]: 2025-11-25 23:48:40.313452804 +0000 UTC m=+0.155331586 container died 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:48:40 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1defb88753c0853a087f2845bfe85fb0c129d72836d117dfb9c2c448b5c3e53c-merged.mount: Deactivated successfully.
Nov 25 18:48:40 np0005535838 podman[229534]: 2025-11-25 23:48:40.364778884 +0000 UTC m=+0.206657646 container remove 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:48:40 np0005535838 systemd[1]: libpod-conmon-987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959.scope: Deactivated successfully.
Nov 25 18:48:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:40 np0005535838 podman[229604]: 2025-11-25 23:48:40.548929783 +0000 UTC m=+0.051075955 container create 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:48:40 np0005535838 systemd[1]: Started libpod-conmon-083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6.scope.
Nov 25 18:48:40 np0005535838 podman[229604]: 2025-11-25 23:48:40.526745105 +0000 UTC m=+0.028891297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:48:40 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:48:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:40 np0005535838 podman[229604]: 2025-11-25 23:48:40.643966731 +0000 UTC m=+0.146112873 container init 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:48:40 np0005535838 podman[229604]: 2025-11-25 23:48:40.659905363 +0000 UTC m=+0.162051535 container start 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 18:48:40 np0005535838 podman[229604]: 2025-11-25 23:48:40.663890809 +0000 UTC m=+0.166036991 container attach 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 18:48:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:48:40.755 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:48:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:48:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:48:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:48:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:48:40 np0005535838 python3.9[229669]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:41 np0005535838 trusting_germain[229661]: {
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:    "0": [
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:        {
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "devices": [
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "/dev/loop3"
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            ],
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_name": "ceph_lv0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_size": "21470642176",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "name": "ceph_lv0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "tags": {
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.cluster_name": "ceph",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.crush_device_class": "",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.encrypted": "0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.osd_id": "0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.type": "block",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.vdo": "0"
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            },
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "type": "block",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "vg_name": "ceph_vg0"
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:        }
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:    ],
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:    "1": [
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:        {
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "devices": [
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "/dev/loop4"
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            ],
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_name": "ceph_lv1",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_size": "21470642176",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "name": "ceph_lv1",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "tags": {
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.cluster_name": "ceph",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.crush_device_class": "",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.encrypted": "0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.osd_id": "1",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.type": "block",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.vdo": "0"
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            },
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "type": "block",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "vg_name": "ceph_vg1"
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:        }
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:    ],
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:    "2": [
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:        {
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "devices": [
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "/dev/loop5"
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            ],
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_name": "ceph_lv2",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_size": "21470642176",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "name": "ceph_lv2",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "tags": {
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.cluster_name": "ceph",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.crush_device_class": "",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.encrypted": "0",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.osd_id": "2",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.type": "block",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:                "ceph.vdo": "0"
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            },
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "type": "block",
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:            "vg_name": "ceph_vg2"
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:        }
Nov 25 18:48:41 np0005535838 trusting_germain[229661]:    ]
Nov 25 18:48:41 np0005535838 trusting_germain[229661]: }
Nov 25 18:48:41 np0005535838 systemd[1]: libpod-083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6.scope: Deactivated successfully.
Nov 25 18:48:41 np0005535838 podman[229604]: 2025-11-25 23:48:41.443072623 +0000 UTC m=+0.945218765 container died 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:48:41 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9-merged.mount: Deactivated successfully.
Nov 25 18:48:41 np0005535838 podman[229604]: 2025-11-25 23:48:41.504567832 +0000 UTC m=+1.006713984 container remove 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 18:48:41 np0005535838 systemd[1]: libpod-conmon-083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6.scope: Deactivated successfully.
Nov 25 18:48:41 np0005535838 python3.9[229827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:42 np0005535838 python3.9[230015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:42 np0005535838 podman[230082]: 2025-11-25 23:48:42.224482847 +0000 UTC m=+0.039802466 container create 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 18:48:42 np0005535838 systemd[1]: Started libpod-conmon-14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be.scope.
Nov 25 18:48:42 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:48:42 np0005535838 podman[230082]: 2025-11-25 23:48:42.206213312 +0000 UTC m=+0.021532911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:48:42 np0005535838 podman[230082]: 2025-11-25 23:48:42.310721652 +0000 UTC m=+0.126041321 container init 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:48:42 np0005535838 podman[230082]: 2025-11-25 23:48:42.322034072 +0000 UTC m=+0.137353691 container start 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 18:48:42 np0005535838 podman[230082]: 2025-11-25 23:48:42.327108186 +0000 UTC m=+0.142427855 container attach 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:48:42 np0005535838 nifty_thompson[230098]: 167 167
Nov 25 18:48:42 np0005535838 systemd[1]: libpod-14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be.scope: Deactivated successfully.
Nov 25 18:48:42 np0005535838 podman[230082]: 2025-11-25 23:48:42.32988036 +0000 UTC m=+0.145199949 container died 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:48:42 np0005535838 systemd[1]: var-lib-containers-storage-overlay-20720e136399b588dd29189fca98e21e10f1bfaeb592dcc8b1216546f49adb48-merged.mount: Deactivated successfully.
Nov 25 18:48:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:42 np0005535838 podman[230082]: 2025-11-25 23:48:42.376983167 +0000 UTC m=+0.192302756 container remove 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 25 18:48:42 np0005535838 systemd[1]: libpod-conmon-14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be.scope: Deactivated successfully.
Nov 25 18:48:42 np0005535838 podman[230196]: 2025-11-25 23:48:42.606068517 +0000 UTC m=+0.059460336 container create 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 18:48:42 np0005535838 systemd[1]: Started libpod-conmon-24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716.scope.
Nov 25 18:48:42 np0005535838 podman[230196]: 2025-11-25 23:48:42.582975676 +0000 UTC m=+0.036367545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:48:42 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:48:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:48:42 np0005535838 podman[230196]: 2025-11-25 23:48:42.707817183 +0000 UTC m=+0.161209052 container init 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:48:42 np0005535838 podman[230196]: 2025-11-25 23:48:42.720586222 +0000 UTC m=+0.173978061 container start 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 18:48:42 np0005535838 podman[230196]: 2025-11-25 23:48:42.725087691 +0000 UTC m=+0.178479530 container attach 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:48:43 np0005535838 python3.9[230269]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:48:43 np0005535838 systemd[1]: Reloading.
Nov 25 18:48:43 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:48:43 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:48:43 np0005535838 systemd[1]: Starting Create netns directory...
Nov 25 18:48:43 np0005535838 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 18:48:43 np0005535838 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 18:48:43 np0005535838 systemd[1]: Finished Create netns directory.
Nov 25 18:48:43 np0005535838 competent_shirley[230236]: {
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "osd_id": 2,
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "type": "bluestore"
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:    },
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "osd_id": 1,
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "type": "bluestore"
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:    },
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "osd_id": 0,
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:        "type": "bluestore"
Nov 25 18:48:43 np0005535838 competent_shirley[230236]:    }
Nov 25 18:48:43 np0005535838 competent_shirley[230236]: }
Nov 25 18:48:43 np0005535838 systemd[1]: libpod-24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716.scope: Deactivated successfully.
Nov 25 18:48:43 np0005535838 systemd[1]: libpod-24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716.scope: Consumed 1.059s CPU time.
Nov 25 18:48:43 np0005535838 podman[230196]: 2025-11-25 23:48:43.779780325 +0000 UTC m=+1.233172164 container died 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:48:43 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a-merged.mount: Deactivated successfully.
Nov 25 18:48:43 np0005535838 podman[230196]: 2025-11-25 23:48:43.854651469 +0000 UTC m=+1.308043308 container remove 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:48:43 np0005535838 systemd[1]: libpod-conmon-24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716.scope: Deactivated successfully.
Nov 25 18:48:43 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:48:43 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:48:43 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:48:43 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:48:43 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev a7d55164-c930-48df-8b5d-5193b4563b8c does not exist
Nov 25 18:48:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:44 np0005535838 python3.9[230553]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:48:44 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:48:44 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:48:44 np0005535838 podman[230677]: 2025-11-25 23:48:44.970098722 +0000 UTC m=+0.064602012 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 18:48:45 np0005535838 python3.9[230720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:45 np0005535838 python3.9[230846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114524.6593208-437-214497841918838/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:48:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:46 np0005535838 python3.9[230998]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:48:47 np0005535838 python3.9[231150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:48:48 np0005535838 python3.9[231273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114526.8832836-462-61917942496976/.source.json _original_basename=.bu5w2_ts follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:48 np0005535838 python3.9[231425]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:50 np0005535838 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 18:48:51 np0005535838 python3.9[231853]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 18:48:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:52 np0005535838 python3.9[232005]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 18:48:53 np0005535838 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 18:48:53 np0005535838 python3.9[232158]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 18:48:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:48:55 np0005535838 python3[232337]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:48:56
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'images']
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:48:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:57 np0005535838 podman[232350]: 2025-11-25 23:48:57.165395247 +0000 UTC m=+1.324060681 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 18:48:57 np0005535838 podman[232405]: 2025-11-25 23:48:57.307029181 +0000 UTC m=+0.046559196 container create b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 18:48:57 np0005535838 podman[232405]: 2025-11-25 23:48:57.284544015 +0000 UTC m=+0.024074000 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 18:48:57 np0005535838 python3[232337]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 18:48:58 np0005535838 python3.9[232595]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:48:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:48:59 np0005535838 python3.9[232749]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:48:59 np0005535838 python3.9[232825]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:49:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:00 np0005535838 python3.9[232976]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764114539.8635983-550-268183740930661/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:01 np0005535838 python3.9[233052]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 18:49:01 np0005535838 systemd[1]: Reloading.
Nov 25 18:49:01 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:49:01 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:49:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:49:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:02 np0005535838 python3.9[233163]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:49:02 np0005535838 systemd[1]: Reloading.
Nov 25 18:49:02 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:49:02 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:49:02 np0005535838 systemd[1]: Starting multipathd container...
Nov 25 18:49:03 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:49:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:03 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:03 np0005535838 systemd[1]: Started /usr/bin/podman healthcheck run b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084.
Nov 25 18:49:03 np0005535838 podman[233202]: 2025-11-25 23:49:03.073360999 +0000 UTC m=+0.138948492 container init b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:49:03 np0005535838 multipathd[233218]: + sudo -E kolla_set_configs
Nov 25 18:49:03 np0005535838 podman[233202]: 2025-11-25 23:49:03.110321038 +0000 UTC m=+0.175908531 container start b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 18:49:03 np0005535838 podman[233202]: multipathd
Nov 25 18:49:03 np0005535838 systemd[1]: Started multipathd container.
Nov 25 18:49:03 np0005535838 multipathd[233218]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 18:49:03 np0005535838 multipathd[233218]: INFO:__main__:Validating config file
Nov 25 18:49:03 np0005535838 multipathd[233218]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 18:49:03 np0005535838 multipathd[233218]: INFO:__main__:Writing out command to execute
Nov 25 18:49:03 np0005535838 podman[233224]: 2025-11-25 23:49:03.217399806 +0000 UTC m=+0.090507739 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 18:49:03 np0005535838 multipathd[233218]: ++ cat /run_command
Nov 25 18:49:03 np0005535838 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-31ede1e73f9bb0c4.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 18:49:03 np0005535838 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-31ede1e73f9bb0c4.service: Failed with result 'exit-code'.
Nov 25 18:49:03 np0005535838 multipathd[233218]: + CMD='/usr/sbin/multipathd -d'
Nov 25 18:49:03 np0005535838 multipathd[233218]: + ARGS=
Nov 25 18:49:03 np0005535838 multipathd[233218]: + sudo kolla_copy_cacerts
Nov 25 18:49:03 np0005535838 multipathd[233218]: + [[ ! -n '' ]]
Nov 25 18:49:03 np0005535838 multipathd[233218]: + . kolla_extend_start
Nov 25 18:49:03 np0005535838 multipathd[233218]: Running command: '/usr/sbin/multipathd -d'
Nov 25 18:49:03 np0005535838 multipathd[233218]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 18:49:03 np0005535838 multipathd[233218]: + umask 0022
Nov 25 18:49:03 np0005535838 multipathd[233218]: + exec /usr/sbin/multipathd -d
Nov 25 18:49:03 np0005535838 multipathd[233218]: 3356.007641 | --------start up--------
Nov 25 18:49:03 np0005535838 multipathd[233218]: 3356.007674 | read /etc/multipath.conf
Nov 25 18:49:03 np0005535838 multipathd[233218]: 3356.018367 | path checkers start up
Nov 25 18:49:04 np0005535838 python3.9[233404]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:49:04 np0005535838 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 18:49:04 np0005535838 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 18:49:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:04 np0005535838 python3.9[233560]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.471023) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545471057, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1042, "num_deletes": 250, "total_data_size": 1047556, "memory_usage": 1068456, "flush_reason": "Manual Compaction"}
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545480043, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 643564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11105, "largest_seqno": 12146, "table_properties": {"data_size": 639580, "index_size": 1636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10103, "raw_average_key_size": 20, "raw_value_size": 631040, "raw_average_value_size": 1249, "num_data_blocks": 75, "num_entries": 505, "num_filter_entries": 505, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114442, "oldest_key_time": 1764114442, "file_creation_time": 1764114545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9067 microseconds, and 3279 cpu microseconds.
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.480087) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 643564 bytes OK
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.480105) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.481922) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.481934) EVENT_LOG_v1 {"time_micros": 1764114545481930, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.481948) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1042676, prev total WAL file size 1042676, number of live WAL files 2.
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.482376) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(628KB)], [29(5760KB)]
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545482435, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 6542131, "oldest_snapshot_seqno": -1}
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3247 keys, 4842013 bytes, temperature: kUnknown
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545506051, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 4842013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4819084, "index_size": 13732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 75031, "raw_average_key_size": 23, "raw_value_size": 4759699, "raw_average_value_size": 1465, "num_data_blocks": 610, "num_entries": 3247, "num_filter_entries": 3247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.506321) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 4842013 bytes
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.507539) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 276.1 rd, 204.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.6 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(17.7) write-amplify(7.5) OK, records in: 3713, records dropped: 466 output_compression: NoCompression
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.507567) EVENT_LOG_v1 {"time_micros": 1764114545507555, "job": 12, "event": "compaction_finished", "compaction_time_micros": 23697, "compaction_time_cpu_micros": 12256, "output_level": 6, "num_output_files": 1, "total_output_size": 4842013, "num_input_records": 3713, "num_output_records": 3247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545507869, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545509714, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.482279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:49:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:49:05 np0005535838 python3.9[233725]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:49:06 np0005535838 systemd[1]: Stopping multipathd container...
Nov 25 18:49:06 np0005535838 multipathd[233218]: 3358.858389 | exit (signal)
Nov 25 18:49:06 np0005535838 multipathd[233218]: 3358.859090 | --------shut down-------
Nov 25 18:49:06 np0005535838 systemd[1]: libpod-b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084.scope: Deactivated successfully.
Nov 25 18:49:06 np0005535838 podman[233729]: 2025-11-25 23:49:06.170808496 +0000 UTC m=+0.119071446 container died b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 18:49:06 np0005535838 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-31ede1e73f9bb0c4.timer: Deactivated successfully.
Nov 25 18:49:06 np0005535838 systemd[1]: Stopped /usr/bin/podman healthcheck run b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084.
Nov 25 18:49:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-userdata-shm.mount: Deactivated successfully.
Nov 25 18:49:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay-cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9-merged.mount: Deactivated successfully.
Nov 25 18:49:06 np0005535838 podman[233729]: 2025-11-25 23:49:06.233548978 +0000 UTC m=+0.181811938 container cleanup b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 18:49:06 np0005535838 podman[233729]: multipathd
Nov 25 18:49:06 np0005535838 podman[233758]: multipathd
Nov 25 18:49:06 np0005535838 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 18:49:06 np0005535838 systemd[1]: Stopped multipathd container.
Nov 25 18:49:06 np0005535838 systemd[1]: Starting multipathd container...
Nov 25 18:49:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:06 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:49:06 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:06 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:06 np0005535838 systemd[1]: Started /usr/bin/podman healthcheck run b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084.
Nov 25 18:49:06 np0005535838 podman[233771]: 2025-11-25 23:49:06.437201535 +0000 UTC m=+0.110275853 container init b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:49:06 np0005535838 multipathd[233786]: + sudo -E kolla_set_configs
Nov 25 18:49:06 np0005535838 podman[233771]: 2025-11-25 23:49:06.470970469 +0000 UTC m=+0.144044757 container start b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 18:49:06 np0005535838 podman[233771]: multipathd
Nov 25 18:49:06 np0005535838 systemd[1]: Started multipathd container.
Nov 25 18:49:06 np0005535838 multipathd[233786]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 18:49:06 np0005535838 multipathd[233786]: INFO:__main__:Validating config file
Nov 25 18:49:06 np0005535838 multipathd[233786]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 18:49:06 np0005535838 multipathd[233786]: INFO:__main__:Writing out command to execute
Nov 25 18:49:06 np0005535838 multipathd[233786]: ++ cat /run_command
Nov 25 18:49:06 np0005535838 multipathd[233786]: + CMD='/usr/sbin/multipathd -d'
Nov 25 18:49:06 np0005535838 multipathd[233786]: + ARGS=
Nov 25 18:49:06 np0005535838 multipathd[233786]: + sudo kolla_copy_cacerts
Nov 25 18:49:06 np0005535838 podman[233793]: 2025-11-25 23:49:06.563040858 +0000 UTC m=+0.085205048 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:49:06 np0005535838 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-41b6612c68b5e00b.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 18:49:06 np0005535838 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-41b6612c68b5e00b.service: Failed with result 'exit-code'.
Nov 25 18:49:06 np0005535838 multipathd[233786]: Running command: '/usr/sbin/multipathd -d'
Nov 25 18:49:06 np0005535838 multipathd[233786]: + [[ ! -n '' ]]
Nov 25 18:49:06 np0005535838 multipathd[233786]: + . kolla_extend_start
Nov 25 18:49:06 np0005535838 multipathd[233786]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 18:49:06 np0005535838 multipathd[233786]: + umask 0022
Nov 25 18:49:06 np0005535838 multipathd[233786]: + exec /usr/sbin/multipathd -d
Nov 25 18:49:06 np0005535838 multipathd[233786]: 3359.309771 | --------start up--------
Nov 25 18:49:06 np0005535838 multipathd[233786]: 3359.309788 | read /etc/multipath.conf
Nov 25 18:49:06 np0005535838 multipathd[233786]: 3359.316411 | path checkers start up
Nov 25 18:49:07 np0005535838 python3.9[233976]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:08 np0005535838 podman[234100]: 2025-11-25 23:49:08.109782419 +0000 UTC m=+0.099215410 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 18:49:08 np0005535838 python3.9[234141]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 18:49:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:09 np0005535838 python3.9[234306]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 18:49:09 np0005535838 kernel: Key type psk registered
Nov 25 18:49:10 np0005535838 python3.9[234469]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:49:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:10 np0005535838 python3.9[234592]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114549.5238996-630-41314381482995/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:11 np0005535838 python3.9[234744]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:12 np0005535838 python3.9[234896]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:49:12 np0005535838 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 18:49:12 np0005535838 systemd[1]: Stopped Load Kernel Modules.
Nov 25 18:49:12 np0005535838 systemd[1]: Stopping Load Kernel Modules...
Nov 25 18:49:12 np0005535838 systemd[1]: Starting Load Kernel Modules...
Nov 25 18:49:12 np0005535838 systemd[1]: Finished Load Kernel Modules.
Nov 25 18:49:13 np0005535838 python3.9[235052]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 18:49:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:15 np0005535838 podman[235055]: 2025-11-25 23:49:15.263049485 +0000 UTC m=+0.079882267 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 18:49:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:15 np0005535838 systemd[1]: Reloading.
Nov 25 18:49:16 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:49:16 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:49:16 np0005535838 systemd[1]: Reloading.
Nov 25 18:49:16 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:49:16 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:49:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:16 np0005535838 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 18:49:16 np0005535838 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 18:49:16 np0005535838 lvm[235186]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 18:49:16 np0005535838 lvm[235186]: VG ceph_vg1 finished
Nov 25 18:49:16 np0005535838 lvm[235188]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 18:49:16 np0005535838 lvm[235188]: VG ceph_vg0 finished
Nov 25 18:49:16 np0005535838 lvm[235187]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 18:49:16 np0005535838 lvm[235187]: VG ceph_vg2 finished
Nov 25 18:49:16 np0005535838 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 18:49:16 np0005535838 systemd[1]: Starting man-db-cache-update.service...
Nov 25 18:49:16 np0005535838 systemd[1]: Reloading.
Nov 25 18:49:17 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:49:17 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:49:17 np0005535838 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 18:49:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:18 np0005535838 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 18:49:18 np0005535838 systemd[1]: Finished man-db-cache-update.service.
Nov 25 18:49:18 np0005535838 systemd[1]: man-db-cache-update.service: Consumed 1.817s CPU time.
Nov 25 18:49:18 np0005535838 systemd[1]: run-r6eb606b3add0414f97fec54969519a24.service: Deactivated successfully.
Nov 25 18:49:18 np0005535838 python3.9[236512]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:49:18 np0005535838 systemd[1]: Stopping Open-iSCSI...
Nov 25 18:49:18 np0005535838 iscsid[223940]: iscsid shutting down.
Nov 25 18:49:18 np0005535838 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 18:49:18 np0005535838 systemd[1]: Stopped Open-iSCSI.
Nov 25 18:49:18 np0005535838 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 18:49:18 np0005535838 systemd[1]: Starting Open-iSCSI...
Nov 25 18:49:18 np0005535838 systemd[1]: Started Open-iSCSI.
Nov 25 18:49:19 np0005535838 python3.9[236686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 18:49:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:20 np0005535838 python3.9[236842]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:21 np0005535838 python3.9[236994]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 18:49:21 np0005535838 systemd[1]: Reloading.
Nov 25 18:49:22 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:49:22 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:49:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:23 np0005535838 python3.9[237178]: ansible-ansible.builtin.service_facts Invoked
Nov 25 18:49:23 np0005535838 network[237195]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 18:49:23 np0005535838 network[237196]: 'network-scripts' will be removed from distribution in near future.
Nov 25 18:49:23 np0005535838 network[237197]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 18:49:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:49:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:49:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:49:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:49:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:49:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:49:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:28 np0005535838 python3.9[237472]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:49:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:29 np0005535838 python3.9[237625]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:49:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:30 np0005535838 python3.9[237778]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:49:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:31 np0005535838 python3.9[237931]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:49:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:33 np0005535838 python3.9[238084]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:49:34 np0005535838 python3.9[238237]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:49:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:35 np0005535838 python3.9[238390]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:49:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:36 np0005535838 podman[238515]: 2025-11-25 23:49:36.863029227 +0000 UTC m=+0.094015792 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 18:49:37 np0005535838 python3.9[238560]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:49:38 np0005535838 python3.9[238714]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:38 np0005535838 podman[238715]: 2025-11-25 23:49:38.298196071 +0000 UTC m=+0.122126156 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 18:49:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:39 np0005535838 python3.9[238892]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:39 np0005535838 python3.9[239044]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:40 np0005535838 python3.9[239196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:49:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:49:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:49:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:49:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:49:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:49:41 np0005535838 python3.9[239348]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:42 np0005535838 python3.9[239500]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:42 np0005535838 python3.9[239652]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:43 np0005535838 python3.9[239804]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:44 np0005535838 python3.9[240056]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:49:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev b405495c-904f-4c77-a287-2f6e033468ee does not exist
Nov 25 18:49:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev e5fdc532-761c-4705-bc09-e94e532c29b5 does not exist
Nov 25 18:49:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 9bb2f837-000c-427b-99fb-7f94f695e8cc does not exist
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:49:45 np0005535838 podman[240314]: 2025-11-25 23:49:45.407273276 +0000 UTC m=+0.074448913 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:45 np0005535838 python3.9[240288]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:49:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:49:45 np0005535838 podman[240476]: 2025-11-25 23:49:45.815276037 +0000 UTC m=+0.058502762 container create 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:49:45 np0005535838 systemd[1]: Started libpod-conmon-197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234.scope.
Nov 25 18:49:45 np0005535838 podman[240476]: 2025-11-25 23:49:45.785811436 +0000 UTC m=+0.029038211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:49:45 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:49:45 np0005535838 podman[240476]: 2025-11-25 23:49:45.909046911 +0000 UTC m=+0.152273626 container init 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:49:45 np0005535838 podman[240476]: 2025-11-25 23:49:45.923011331 +0000 UTC m=+0.166238056 container start 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 18:49:45 np0005535838 podman[240476]: 2025-11-25 23:49:45.927414818 +0000 UTC m=+0.170641543 container attach 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:49:45 np0005535838 jolly_taussig[240527]: 167 167
Nov 25 18:49:45 np0005535838 systemd[1]: libpod-197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234.scope: Deactivated successfully.
Nov 25 18:49:45 np0005535838 podman[240476]: 2025-11-25 23:49:45.932374599 +0000 UTC m=+0.175601314 container died 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:49:45 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f8c9f2b897a7b3bd15dc3bb5f48c009791c826ece9d5aa8f40167bda26822df4-merged.mount: Deactivated successfully.
Nov 25 18:49:45 np0005535838 podman[240476]: 2025-11-25 23:49:45.982506267 +0000 UTC m=+0.225732962 container remove 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:49:45 np0005535838 systemd[1]: libpod-conmon-197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234.scope: Deactivated successfully.
Nov 25 18:49:46 np0005535838 python3.9[240587]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:46 np0005535838 podman[240593]: 2025-11-25 23:49:46.217950615 +0000 UTC m=+0.059353293 container create da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:49:46 np0005535838 systemd[1]: Started libpod-conmon-da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e.scope.
Nov 25 18:49:46 np0005535838 podman[240593]: 2025-11-25 23:49:46.195306455 +0000 UTC m=+0.036709133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:49:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:49:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:46 np0005535838 podman[240593]: 2025-11-25 23:49:46.3166396 +0000 UTC m=+0.158042278 container init da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 18:49:46 np0005535838 podman[240593]: 2025-11-25 23:49:46.325459323 +0000 UTC m=+0.166861971 container start da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 18:49:46 np0005535838 podman[240593]: 2025-11-25 23:49:46.331047252 +0000 UTC m=+0.172449900 container attach da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:49:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:46 np0005535838 python3.9[240765]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:47 np0005535838 exciting_nightingale[240627]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:49:47 np0005535838 exciting_nightingale[240627]: --> relative data size: 1.0
Nov 25 18:49:47 np0005535838 exciting_nightingale[240627]: --> All data devices are unavailable
Nov 25 18:49:47 np0005535838 systemd[1]: libpod-da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e.scope: Deactivated successfully.
Nov 25 18:49:47 np0005535838 systemd[1]: libpod-da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e.scope: Consumed 1.078s CPU time.
Nov 25 18:49:47 np0005535838 podman[240593]: 2025-11-25 23:49:47.549010842 +0000 UTC m=+1.390413480 container died da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:49:47 np0005535838 systemd[1]: var-lib-containers-storage-overlay-e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677-merged.mount: Deactivated successfully.
Nov 25 18:49:47 np0005535838 podman[240593]: 2025-11-25 23:49:47.630443979 +0000 UTC m=+1.471846627 container remove da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:49:47 np0005535838 systemd[1]: libpod-conmon-da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e.scope: Deactivated successfully.
Nov 25 18:49:47 np0005535838 python3.9[240941]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:48 np0005535838 podman[241245]: 2025-11-25 23:49:48.399959338 +0000 UTC m=+0.059275912 container create e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:49:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:48 np0005535838 systemd[1]: Started libpod-conmon-e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f.scope.
Nov 25 18:49:48 np0005535838 python3.9[241230]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:48 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:49:48 np0005535838 podman[241245]: 2025-11-25 23:49:48.376420564 +0000 UTC m=+0.035737128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:49:48 np0005535838 podman[241245]: 2025-11-25 23:49:48.486387458 +0000 UTC m=+0.145704022 container init e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:49:48 np0005535838 podman[241245]: 2025-11-25 23:49:48.492406457 +0000 UTC m=+0.151723011 container start e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:49:48 np0005535838 podman[241245]: 2025-11-25 23:49:48.495677763 +0000 UTC m=+0.154994337 container attach e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:49:48 np0005535838 boring_mccarthy[241261]: 167 167
Nov 25 18:49:48 np0005535838 systemd[1]: libpod-e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f.scope: Deactivated successfully.
Nov 25 18:49:48 np0005535838 podman[241245]: 2025-11-25 23:49:48.497311486 +0000 UTC m=+0.156628030 container died e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:49:48 np0005535838 systemd[1]: var-lib-containers-storage-overlay-610c56506e33f9b124375e4b36562a252773b3a4637a567b66210558e6edec30-merged.mount: Deactivated successfully.
Nov 25 18:49:48 np0005535838 podman[241245]: 2025-11-25 23:49:48.531234576 +0000 UTC m=+0.190551120 container remove e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:49:48 np0005535838 systemd[1]: libpod-conmon-e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f.scope: Deactivated successfully.
Nov 25 18:49:48 np0005535838 podman[241348]: 2025-11-25 23:49:48.709419357 +0000 UTC m=+0.049975286 container create caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 18:49:48 np0005535838 systemd[1]: Started libpod-conmon-caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079.scope.
Nov 25 18:49:48 np0005535838 podman[241348]: 2025-11-25 23:49:48.683151701 +0000 UTC m=+0.023707690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:49:48 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:49:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:48 np0005535838 podman[241348]: 2025-11-25 23:49:48.818058325 +0000 UTC m=+0.158614244 container init caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:49:48 np0005535838 podman[241348]: 2025-11-25 23:49:48.833187606 +0000 UTC m=+0.173743505 container start caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:49:48 np0005535838 podman[241348]: 2025-11-25 23:49:48.838131087 +0000 UTC m=+0.178686986 container attach caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:49:49 np0005535838 python3.9[241457]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]: {
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:    "0": [
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:        {
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "devices": [
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "/dev/loop3"
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            ],
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_name": "ceph_lv0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_size": "21470642176",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "name": "ceph_lv0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "tags": {
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.cluster_name": "ceph",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.crush_device_class": "",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.encrypted": "0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.osd_id": "0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.type": "block",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.vdo": "0"
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            },
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "type": "block",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "vg_name": "ceph_vg0"
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:        }
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:    ],
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:    "1": [
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:        {
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "devices": [
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "/dev/loop4"
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            ],
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_name": "ceph_lv1",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_size": "21470642176",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "name": "ceph_lv1",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "tags": {
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.cluster_name": "ceph",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.crush_device_class": "",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.encrypted": "0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.osd_id": "1",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.type": "block",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.vdo": "0"
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            },
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "type": "block",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "vg_name": "ceph_vg1"
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:        }
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:    ],
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:    "2": [
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:        {
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "devices": [
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "/dev/loop5"
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            ],
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_name": "ceph_lv2",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_size": "21470642176",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "name": "ceph_lv2",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "tags": {
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.cluster_name": "ceph",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.crush_device_class": "",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.encrypted": "0",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.osd_id": "2",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.type": "block",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:                "ceph.vdo": "0"
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            },
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "type": "block",
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:            "vg_name": "ceph_vg2"
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:        }
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]:    ]
Nov 25 18:49:49 np0005535838 sad_lederberg[241398]: }
Nov 25 18:49:49 np0005535838 systemd[1]: libpod-caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079.scope: Deactivated successfully.
Nov 25 18:49:49 np0005535838 podman[241348]: 2025-11-25 23:49:49.628696702 +0000 UTC m=+0.969252611 container died caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 18:49:49 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3-merged.mount: Deactivated successfully.
Nov 25 18:49:49 np0005535838 podman[241348]: 2025-11-25 23:49:49.683506145 +0000 UTC m=+1.024062064 container remove caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:49:49 np0005535838 systemd[1]: libpod-conmon-caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079.scope: Deactivated successfully.
Nov 25 18:49:49 np0005535838 python3.9[241614]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:49:50 np0005535838 podman[241840]: 2025-11-25 23:49:50.293701672 +0000 UTC m=+0.046622476 container create bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 18:49:50 np0005535838 systemd[1]: Started libpod-conmon-bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59.scope.
Nov 25 18:49:50 np0005535838 podman[241840]: 2025-11-25 23:49:50.273493277 +0000 UTC m=+0.026414091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:49:50 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:49:50 np0005535838 podman[241840]: 2025-11-25 23:49:50.395806367 +0000 UTC m=+0.148727211 container init bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:49:50 np0005535838 podman[241840]: 2025-11-25 23:49:50.401794396 +0000 UTC m=+0.154715210 container start bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:49:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:50 np0005535838 podman[241840]: 2025-11-25 23:49:50.40456404 +0000 UTC m=+0.157484884 container attach bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:49:50 np0005535838 gallant_ritchie[241887]: 167 167
Nov 25 18:49:50 np0005535838 systemd[1]: libpod-bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59.scope: Deactivated successfully.
Nov 25 18:49:50 np0005535838 podman[241840]: 2025-11-25 23:49:50.408149664 +0000 UTC m=+0.161070478 container died bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:49:50 np0005535838 systemd[1]: var-lib-containers-storage-overlay-380de1be2e0748c1d3ec35ff30b31956962cf8bb6e4e3fde3a81baea615deefa-merged.mount: Deactivated successfully.
Nov 25 18:49:50 np0005535838 podman[241840]: 2025-11-25 23:49:50.445555926 +0000 UTC m=+0.198476740 container remove bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:49:50 np0005535838 systemd[1]: libpod-conmon-bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59.scope: Deactivated successfully.
Nov 25 18:49:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:50 np0005535838 podman[241956]: 2025-11-25 23:49:50.595984111 +0000 UTC m=+0.037935526 container create 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:49:50 np0005535838 python3.9[241948]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:49:50 np0005535838 systemd[1]: Started libpod-conmon-63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46.scope.
Nov 25 18:49:50 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:49:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:50 np0005535838 podman[241956]: 2025-11-25 23:49:50.58088287 +0000 UTC m=+0.022834305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:49:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:49:50 np0005535838 podman[241956]: 2025-11-25 23:49:50.688401729 +0000 UTC m=+0.130353164 container init 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:49:50 np0005535838 podman[241956]: 2025-11-25 23:49:50.70049841 +0000 UTC m=+0.142449835 container start 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:49:50 np0005535838 podman[241956]: 2025-11-25 23:49:50.703845118 +0000 UTC m=+0.145796543 container attach 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:49:51 np0005535838 python3.9[242130]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]: {
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "osd_id": 2,
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "type": "bluestore"
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:    },
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "osd_id": 1,
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "type": "bluestore"
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:    },
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "osd_id": 0,
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:        "type": "bluestore"
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]:    }
Nov 25 18:49:51 np0005535838 jolly_swanson[241976]: }
Nov 25 18:49:51 np0005535838 systemd[1]: libpod-63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46.scope: Deactivated successfully.
Nov 25 18:49:51 np0005535838 systemd[1]: libpod-63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46.scope: Consumed 1.045s CPU time.
Nov 25 18:49:51 np0005535838 podman[241956]: 2025-11-25 23:49:51.74443648 +0000 UTC m=+1.186387925 container died 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:49:51 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b-merged.mount: Deactivated successfully.
Nov 25 18:49:51 np0005535838 podman[241956]: 2025-11-25 23:49:51.818480821 +0000 UTC m=+1.260432246 container remove 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:49:51 np0005535838 systemd[1]: libpod-conmon-63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46.scope: Deactivated successfully.
Nov 25 18:49:51 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:49:51 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:49:51 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:49:51 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:49:51 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 3f3de300-2790-40d3-8b7c-1bc201f3279a does not exist
Nov 25 18:49:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:52 np0005535838 python3.9[242372]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 18:49:52 np0005535838 systemd[1]: Reloading.
Nov 25 18:49:52 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:49:52 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:49:52 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:49:52 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:49:53 np0005535838 python3.9[242558]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:49:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:54 np0005535838 python3.9[242711]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:49:55 np0005535838 python3.9[242864]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:49:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:49:56
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.data']
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:49:56 np0005535838 python3.9[243017]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:49:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:56 np0005535838 python3.9[243170]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:49:57 np0005535838 python3.9[243323]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:49:58 np0005535838 python3.9[243476]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:49:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:49:58 np0005535838 python3.9[243629]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 18:50:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:00 np0005535838 python3.9[243782]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:01 np0005535838 python3.9[243934]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:50:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:50:02 np0005535838 python3.9[244086]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:03 np0005535838 python3.9[244238]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:04 np0005535838 python3.9[244390]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:04 np0005535838 python3.9[244542]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:05 np0005535838 python3.9[244694]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:06 np0005535838 python3.9[244846]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:07 np0005535838 python3.9[244998]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:07 np0005535838 podman[244999]: 2025-11-25 23:50:07.127439581 +0000 UTC m=+0.074902315 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 18:50:07 np0005535838 python3.9[245172]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:09 np0005535838 podman[245197]: 2025-11-25 23:50:09.269079724 +0000 UTC m=+0.100314429 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 18:50:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:13 np0005535838 python3.9[245351]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 18:50:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:14 np0005535838 python3.9[245504]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 18:50:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:15 np0005535838 podman[245634]: 2025-11-25 23:50:15.845785475 +0000 UTC m=+0.085904177 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:50:16 np0005535838 python3.9[245681]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 18:50:16 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:50:16 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:50:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:17 np0005535838 systemd-logind[789]: New session 51 of user zuul.
Nov 25 18:50:17 np0005535838 systemd[1]: Started Session 51 of User zuul.
Nov 25 18:50:17 np0005535838 systemd-logind[789]: Session 51 logged out. Waiting for processes to exit.
Nov 25 18:50:17 np0005535838 systemd[1]: session-51.scope: Deactivated successfully.
Nov 25 18:50:17 np0005535838 systemd-logind[789]: Removed session 51.
Nov 25 18:50:18 np0005535838 python3.9[245868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:50:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:18 np0005535838 python3.9[245989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114617.5388978-1249-116114722696709/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:19 np0005535838 python3.9[246139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:50:19 np0005535838 python3.9[246215]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:20 np0005535838 python3.9[246365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:50:21 np0005535838 python3.9[246486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114620.10584-1249-135969349598811/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:22 np0005535838 python3.9[246636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:50:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:22 np0005535838 python3.9[246757]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114621.5166802-1249-55214467628208/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:23 np0005535838 python3.9[246907]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:50:24 np0005535838 python3.9[247028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114622.9751227-1249-263740768742195/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:24 np0005535838 python3.9[247178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:50:25 np0005535838 python3.9[247299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114624.3093696-1249-7496938757179/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:50:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:50:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:50:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:50:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:50:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:50:26 np0005535838 python3.9[247452]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:50:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:27 np0005535838 python3.9[247604]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:50:28 np0005535838 python3.9[247756]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:50:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:28 np0005535838 python3.9[247908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:50:29 np0005535838 python3.9[248031]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764114628.2549345-1356-271612989107131/.source _original_basename=.l3qcxbuz follow=False checksum=8498dcb380c8ad9e7713c1be800fda9fd5956bfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 18:50:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:30 np0005535838 python3.9[248183]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:50:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:31 np0005535838 python3.9[248336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:50:31 np0005535838 python3.9[248457]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114630.7055583-1382-261293425673410/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:32 np0005535838 python3.9[248607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 18:50:33 np0005535838 python3.9[248728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114632.0482116-1397-84951417737157/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 18:50:34 np0005535838 python3.9[248880]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 18:50:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:34 np0005535838 python3.9[249032]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 18:50:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:36 np0005535838 python3[249184]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 18:50:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:37 np0005535838 podman[249221]: 2025-11-25 23:50:37.242093999 +0000 UTC m=+0.071893766 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 18:50:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:50:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:50:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:50:40.758 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:50:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:50:40.758 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:50:41 np0005535838 podman[249258]: 2025-11-25 23:50:41.124664957 +0000 UTC m=+0.941673951 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 18:50:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:44 np0005535838 podman[249198]: 2025-11-25 23:50:44.634052478 +0000 UTC m=+8.523447550 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 18:50:44 np0005535838 podman[249342]: 2025-11-25 23:50:44.858460044 +0000 UTC m=+0.072333457 container create a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm)
Nov 25 18:50:44 np0005535838 podman[249342]: 2025-11-25 23:50:44.824821992 +0000 UTC m=+0.038695475 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 18:50:44 np0005535838 python3[249184]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 25 18:50:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:45 np0005535838 python3.9[249532]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:50:46 np0005535838 podman[249559]: 2025-11-25 23:50:46.238618321 +0000 UTC m=+0.057648188 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 18:50:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:46 np0005535838 python3.9[249705]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 18:50:47 np0005535838 python3.9[249857]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 18:50:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:48 np0005535838 python3[250009]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 18:50:48 np0005535838 podman[250047]: 2025-11-25 23:50:48.857099077 +0000 UTC m=+0.051899556 container create 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 18:50:48 np0005535838 podman[250047]: 2025-11-25 23:50:48.825789048 +0000 UTC m=+0.020589547 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 18:50:48 np0005535838 python3[250009]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 25 18:50:49 np0005535838 python3.9[250238]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:50:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:50 np0005535838 python3.9[250392]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:50:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:51 np0005535838 python3.9[250543]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764114650.5445268-1489-246375322935149/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 18:50:51 np0005535838 python3.9[250619]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 18:50:51 np0005535838 systemd[1]: Reloading.
Nov 25 18:50:51 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:50:51 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:50:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:52 np0005535838 python3.9[250829]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:50:52 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev a30c7d7d-19ef-409a-9846-f7d9f6673888 does not exist
Nov 25 18:50:52 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev e0d96717-f431-430c-bd71-8cc71f0c692d does not exist
Nov 25 18:50:52 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev b09e8da1-2ed8-47cd-8479-92d1b119d5c0 does not exist
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:50:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:50:52 np0005535838 systemd[1]: Reloading.
Nov 25 18:50:52 np0005535838 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 18:50:52 np0005535838 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 18:50:53 np0005535838 systemd[1]: Starting nova_compute container...
Nov 25 18:50:53 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:50:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:53 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:53 np0005535838 podman[250922]: 2025-11-25 23:50:53.324307696 +0000 UTC m=+0.136806066 container init 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 18:50:53 np0005535838 podman[250922]: 2025-11-25 23:50:53.331806524 +0000 UTC m=+0.144304894 container start 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm)
Nov 25 18:50:53 np0005535838 podman[250922]: nova_compute
Nov 25 18:50:53 np0005535838 nova_compute[250966]: + sudo -E kolla_set_configs
Nov 25 18:50:53 np0005535838 systemd[1]: Started nova_compute container.
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Validating config file
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying service configuration files
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Deleting /etc/ceph
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Creating directory /etc/ceph
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Writing out command to execute
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 18:50:53 np0005535838 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 18:50:53 np0005535838 nova_compute[250966]: ++ cat /run_command
Nov 25 18:50:53 np0005535838 nova_compute[250966]: + CMD=nova-compute
Nov 25 18:50:53 np0005535838 nova_compute[250966]: + ARGS=
Nov 25 18:50:53 np0005535838 nova_compute[250966]: + sudo kolla_copy_cacerts
Nov 25 18:50:53 np0005535838 nova_compute[250966]: + [[ ! -n '' ]]
Nov 25 18:50:53 np0005535838 nova_compute[250966]: + . kolla_extend_start
Nov 25 18:50:53 np0005535838 nova_compute[250966]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 18:50:53 np0005535838 nova_compute[250966]: Running command: 'nova-compute'
Nov 25 18:50:53 np0005535838 nova_compute[250966]: + umask 0022
Nov 25 18:50:53 np0005535838 nova_compute[250966]: + exec nova-compute
Nov 25 18:50:53 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:50:53 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:50:53 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:50:53 np0005535838 podman[251086]: 2025-11-25 23:50:53.751447613 +0000 UTC m=+0.048591379 container create cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:50:53 np0005535838 systemd[1]: Started libpod-conmon-cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e.scope.
Nov 25 18:50:53 np0005535838 podman[251086]: 2025-11-25 23:50:53.724685674 +0000 UTC m=+0.021829460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:50:53 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:50:53 np0005535838 podman[251086]: 2025-11-25 23:50:53.841274423 +0000 UTC m=+0.138418169 container init cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:50:53 np0005535838 podman[251086]: 2025-11-25 23:50:53.847620131 +0000 UTC m=+0.144763857 container start cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:50:53 np0005535838 podman[251086]: 2025-11-25 23:50:53.854461882 +0000 UTC m=+0.151605628 container attach cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:50:53 np0005535838 thirsty_nash[251103]: 167 167
Nov 25 18:50:53 np0005535838 systemd[1]: libpod-cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e.scope: Deactivated successfully.
Nov 25 18:50:53 np0005535838 podman[251086]: 2025-11-25 23:50:53.865980657 +0000 UTC m=+0.163124393 container died cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:50:53 np0005535838 systemd[1]: var-lib-containers-storage-overlay-5f2b0926f6b5c5f6d31d623c149208697a32edeed18d46ab6cdb14ed445d12cd-merged.mount: Deactivated successfully.
Nov 25 18:50:53 np0005535838 podman[251086]: 2025-11-25 23:50:53.914515073 +0000 UTC m=+0.211658799 container remove cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:50:53 np0005535838 systemd[1]: libpod-conmon-cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e.scope: Deactivated successfully.
Nov 25 18:50:54 np0005535838 podman[251209]: 2025-11-25 23:50:54.084142597 +0000 UTC m=+0.042284551 container create c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:50:54 np0005535838 systemd[1]: Started libpod-conmon-c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470.scope.
Nov 25 18:50:54 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:50:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:54 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:54 np0005535838 podman[251209]: 2025-11-25 23:50:54.062442173 +0000 UTC m=+0.020584137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:50:54 np0005535838 podman[251209]: 2025-11-25 23:50:54.17670968 +0000 UTC m=+0.134851634 container init c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:50:54 np0005535838 podman[251209]: 2025-11-25 23:50:54.186451198 +0000 UTC m=+0.144593162 container start c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:50:54 np0005535838 podman[251209]: 2025-11-25 23:50:54.189415006 +0000 UTC m=+0.147556960 container attach c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:50:54 np0005535838 python3.9[251268]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:50:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:55 np0005535838 python3.9[251432]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:50:55 np0005535838 stupefied_cray[251269]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:50:55 np0005535838 stupefied_cray[251269]: --> relative data size: 1.0
Nov 25 18:50:55 np0005535838 stupefied_cray[251269]: --> All data devices are unavailable
Nov 25 18:50:55 np0005535838 systemd[1]: libpod-c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470.scope: Deactivated successfully.
Nov 25 18:50:55 np0005535838 systemd[1]: libpod-c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470.scope: Consumed 1.030s CPU time.
Nov 25 18:50:55 np0005535838 podman[251209]: 2025-11-25 23:50:55.293500239 +0000 UTC m=+1.251642203 container died c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:50:55 np0005535838 systemd[1]: var-lib-containers-storage-overlay-57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda-merged.mount: Deactivated successfully.
Nov 25 18:50:55 np0005535838 podman[251209]: 2025-11-25 23:50:55.36109458 +0000 UTC m=+1.319236544 container remove c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:50:55 np0005535838 systemd[1]: libpod-conmon-c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470.scope: Deactivated successfully.
Nov 25 18:50:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:50:55 np0005535838 nova_compute[250966]: 2025-11-25 23:50:55.825 250990 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 18:50:55 np0005535838 nova_compute[250966]: 2025-11-25 23:50:55.826 250990 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 18:50:55 np0005535838 nova_compute[250966]: 2025-11-25 23:50:55.826 250990 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 18:50:55 np0005535838 nova_compute[250966]: 2025-11-25 23:50:55.826 250990 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 25 18:50:55 np0005535838 python3.9[251715]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 18:50:55 np0005535838 nova_compute[250966]: 2025-11-25 23:50:55.973 250990 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.008 250990 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.009 250990 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:50:56
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data']
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:50:56 np0005535838 podman[251764]: 2025-11-25 23:50:56.083664664 +0000 UTC m=+0.052907612 container create b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:50:56 np0005535838 systemd[1]: Started libpod-conmon-b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10.scope.
Nov 25 18:50:56 np0005535838 podman[251764]: 2025-11-25 23:50:56.062662398 +0000 UTC m=+0.031905346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:50:56 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:50:56 np0005535838 podman[251764]: 2025-11-25 23:50:56.186106869 +0000 UTC m=+0.155349797 container init b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:50:56 np0005535838 podman[251764]: 2025-11-25 23:50:56.195311143 +0000 UTC m=+0.164554051 container start b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:50:56 np0005535838 podman[251764]: 2025-11-25 23:50:56.198108347 +0000 UTC m=+0.167351255 container attach b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:50:56 np0005535838 blissful_shaw[251802]: 167 167
Nov 25 18:50:56 np0005535838 systemd[1]: libpod-b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10.scope: Deactivated successfully.
Nov 25 18:50:56 np0005535838 podman[251764]: 2025-11-25 23:50:56.203618733 +0000 UTC m=+0.172861671 container died b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:50:56 np0005535838 systemd[1]: var-lib-containers-storage-overlay-3ef92f02c87643eb05c6808c0ecc5f42414cbe515ff9f03d49c3bfa0ac50d532-merged.mount: Deactivated successfully.
Nov 25 18:50:56 np0005535838 podman[251764]: 2025-11-25 23:50:56.244553807 +0000 UTC m=+0.213796725 container remove b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:50:56 np0005535838 systemd[1]: libpod-conmon-b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10.scope: Deactivated successfully.
Nov 25 18:50:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:56 np0005535838 podman[251876]: 2025-11-25 23:50:56.438151847 +0000 UTC m=+0.047602762 container create 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:50:56 np0005535838 systemd[1]: Started libpod-conmon-10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634.scope.
Nov 25 18:50:56 np0005535838 podman[251876]: 2025-11-25 23:50:56.414964563 +0000 UTC m=+0.024415498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:50:56 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:50:56 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:56 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:56 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:56 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:56 np0005535838 podman[251876]: 2025-11-25 23:50:56.539444901 +0000 UTC m=+0.148895836 container init 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:50:56 np0005535838 podman[251876]: 2025-11-25 23:50:56.552941259 +0000 UTC m=+0.162392144 container start 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 18:50:56 np0005535838 podman[251876]: 2025-11-25 23:50:56.556684207 +0000 UTC m=+0.166135122 container attach 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.689 250990 INFO nova.virt.driver [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.893 250990 INFO nova.compute.provider_config [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.918 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.919 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.920 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.920 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.920 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.920 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.921 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.921 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.921 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.922 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.922 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.922 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.922 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.923 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.923 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.923 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.924 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.924 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.924 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.924 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.925 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.925 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.925 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.926 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.926 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.926 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.926 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.927 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.927 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.927 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.928 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.928 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.928 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.929 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.929 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.929 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.929 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.930 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.930 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.930 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.930 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.931 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.931 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.931 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.932 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.932 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.932 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.933 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.933 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.933 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.933 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.934 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.934 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.934 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.935 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.935 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.935 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.935 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.936 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.936 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.936 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.936 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.937 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.937 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.937 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.937 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.938 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.938 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.938 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.938 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.939 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.939 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.939 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.940 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.940 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.940 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.940 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.941 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.941 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.941 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.942 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.942 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.942 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.943 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.943 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.943 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.943 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.944 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.944 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.944 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.944 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.945 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.945 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.945 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.946 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.946 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.946 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.946 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.947 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.947 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.947 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.947 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:56 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.987 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.987 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.987 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.003 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.003 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.003 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.003 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.012 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.012 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.012 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.012 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.014 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.014 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.014 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.014 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.024 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.024 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.024 250990 WARNING oslo_config.cfg [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 18:50:57 np0005535838 nova_compute[250966]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 18:50:57 np0005535838 nova_compute[250966]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 18:50:57 np0005535838 nova_compute[250966]: and ``live_migration_inbound_addr`` respectively.
Nov 25 18:50:57 np0005535838 nova_compute[250966]: ).  Its value may be silently ignored in the future.#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.024 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_secret_uuid        = 101922db-575f-58e2-980f-928050464f69 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.034 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.034 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.034 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.035 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.035 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.035 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.036 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.036 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.036 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.037 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.037 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.037 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.038 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.038 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.038 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.092 250990 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.124 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.125 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.125 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.125 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 25 18:50:57 np0005535838 python3.9[251973]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 18:50:57 np0005535838 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 18:50:57 np0005535838 systemd[1]: Started libvirt QEMU daemon.
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.210 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f64b8f377c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.212 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f64b8f377c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.213 250990 INFO nova.virt.libvirt.driver [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 25 18:50:57 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.235 250990 WARNING nova.virt.libvirt.driver [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 25 18:50:57 np0005535838 nova_compute[250966]: 2025-11-25 23:50:57.235 250990 DEBUG nova.virt.libvirt.volume.mount [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]: {
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:    "0": [
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:        {
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "devices": [
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "/dev/loop3"
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            ],
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_name": "ceph_lv0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_size": "21470642176",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "name": "ceph_lv0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "tags": {
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.cluster_name": "ceph",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.crush_device_class": "",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.encrypted": "0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.osd_id": "0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.type": "block",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.vdo": "0"
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            },
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "type": "block",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "vg_name": "ceph_vg0"
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:        }
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:    ],
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:    "1": [
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:        {
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "devices": [
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "/dev/loop4"
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            ],
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_name": "ceph_lv1",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_size": "21470642176",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "name": "ceph_lv1",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "tags": {
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.cluster_name": "ceph",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.crush_device_class": "",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.encrypted": "0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.osd_id": "1",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.type": "block",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.vdo": "0"
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            },
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "type": "block",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "vg_name": "ceph_vg1"
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:        }
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:    ],
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:    "2": [
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:        {
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "devices": [
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "/dev/loop5"
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            ],
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_name": "ceph_lv2",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_size": "21470642176",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "name": "ceph_lv2",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "tags": {
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.cluster_name": "ceph",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.crush_device_class": "",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.encrypted": "0",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.osd_id": "2",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.type": "block",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:                "ceph.vdo": "0"
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            },
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "type": "block",
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:            "vg_name": "ceph_vg2"
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:        }
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]:    ]
Nov 25 18:50:57 np0005535838 brave_ramanujan[251893]: }
Nov 25 18:50:57 np0005535838 systemd[1]: libpod-10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634.scope: Deactivated successfully.
Nov 25 18:50:57 np0005535838 podman[252050]: 2025-11-25 23:50:57.341593674 +0000 UTC m=+0.023228127 container died 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 18:50:57 np0005535838 systemd[1]: var-lib-containers-storage-overlay-80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841-merged.mount: Deactivated successfully.
Nov 25 18:50:57 np0005535838 podman[252050]: 2025-11-25 23:50:57.398521712 +0000 UTC m=+0.080156135 container remove 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:50:57 np0005535838 systemd[1]: libpod-conmon-10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634.scope: Deactivated successfully.
Nov 25 18:50:58 np0005535838 podman[252366]: 2025-11-25 23:50:58.048452962 +0000 UTC m=+0.044229183 container create 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:50:58 np0005535838 python3.9[252319]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 18:50:58 np0005535838 systemd[1]: Started libpod-conmon-54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d.scope.
Nov 25 18:50:58 np0005535838 systemd[1]: Stopping nova_compute container...
Nov 25 18:50:58 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:50:58 np0005535838 podman[252366]: 2025-11-25 23:50:58.031874043 +0000 UTC m=+0.027650284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:50:58 np0005535838 podman[252366]: 2025-11-25 23:50:58.135534329 +0000 UTC m=+0.131310570 container init 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 18:50:58 np0005535838 podman[252366]: 2025-11-25 23:50:58.141882027 +0000 UTC m=+0.137658248 container start 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:50:58 np0005535838 agitated_sanderson[252386]: 167 167
Nov 25 18:50:58 np0005535838 systemd[1]: libpod-54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d.scope: Deactivated successfully.
Nov 25 18:50:58 np0005535838 podman[252366]: 2025-11-25 23:50:58.148136283 +0000 UTC m=+0.143912504 container attach 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:50:58 np0005535838 podman[252366]: 2025-11-25 23:50:58.148729399 +0000 UTC m=+0.144505650 container died 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:50:58 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c58f75b4eb28cf5255853ee5ed7e68d28e3c974912bd34e78e536b2bd6e54d86-merged.mount: Deactivated successfully.
Nov 25 18:50:58 np0005535838 podman[252366]: 2025-11-25 23:50:58.190560327 +0000 UTC m=+0.186336568 container remove 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:50:58 np0005535838 nova_compute[250966]: 2025-11-25 23:50:58.191 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 18:50:58 np0005535838 nova_compute[250966]: 2025-11-25 23:50:58.192 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 18:50:58 np0005535838 nova_compute[250966]: 2025-11-25 23:50:58.192 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 18:50:58 np0005535838 systemd[1]: libpod-conmon-54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d.scope: Deactivated successfully.
Nov 25 18:50:58 np0005535838 podman[252423]: 2025-11-25 23:50:58.372933089 +0000 UTC m=+0.036990711 container create 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 25 18:50:58 np0005535838 systemd[1]: Started libpod-conmon-8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b.scope.
Nov 25 18:50:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:50:58 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:50:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:58 np0005535838 podman[252423]: 2025-11-25 23:50:58.356642068 +0000 UTC m=+0.020699710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:50:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:58 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:50:58 np0005535838 podman[252423]: 2025-11-25 23:50:58.468726897 +0000 UTC m=+0.132784559 container init 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:50:58 np0005535838 podman[252423]: 2025-11-25 23:50:58.47561886 +0000 UTC m=+0.139676482 container start 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 18:50:58 np0005535838 podman[252423]: 2025-11-25 23:50:58.478594959 +0000 UTC m=+0.142652591 container attach 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:50:58 np0005535838 systemd[1]: libpod-16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a.scope: Deactivated successfully.
Nov 25 18:50:58 np0005535838 systemd[1]: libpod-16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a.scope: Consumed 3.114s CPU time.
Nov 25 18:50:58 np0005535838 virtqemud[251995]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 18:50:58 np0005535838 virtqemud[251995]: hostname: compute-0
Nov 25 18:50:58 np0005535838 virtqemud[251995]: End of file while reading data: Input/output error
Nov 25 18:50:58 np0005535838 podman[252388]: 2025-11-25 23:50:58.621553396 +0000 UTC m=+0.517035789 container died 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:50:58 np0005535838 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a-userdata-shm.mount: Deactivated successfully.
Nov 25 18:50:58 np0005535838 systemd[1]: var-lib-containers-storage-overlay-a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045-merged.mount: Deactivated successfully.
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]: {
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "osd_id": 2,
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "type": "bluestore"
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:    },
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "osd_id": 1,
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "type": "bluestore"
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:    },
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "osd_id": 0,
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:        "type": "bluestore"
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]:    }
Nov 25 18:50:59 np0005535838 zen_wilbur[252439]: }
Nov 25 18:50:59 np0005535838 systemd[1]: libpod-8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b.scope: Deactivated successfully.
Nov 25 18:50:59 np0005535838 systemd[1]: libpod-8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b.scope: Consumed 1.042s CPU time.
Nov 25 18:50:59 np0005535838 podman[252423]: 2025-11-25 23:50:59.534292319 +0000 UTC m=+1.198349981 container died 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 18:51:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:00 np0005535838 podman[252388]: 2025-11-25 23:51:00.524577977 +0000 UTC m=+2.420060410 container cleanup 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:51:00 np0005535838 podman[252388]: nova_compute
Nov 25 18:51:00 np0005535838 systemd[1]: var-lib-containers-storage-overlay-e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c-merged.mount: Deactivated successfully.
Nov 25 18:51:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:00 np0005535838 podman[252423]: 2025-11-25 23:51:00.585718107 +0000 UTC m=+2.249775729 container remove 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:51:00 np0005535838 podman[252499]: nova_compute
Nov 25 18:51:00 np0005535838 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 18:51:00 np0005535838 systemd[1]: Stopped nova_compute container.
Nov 25 18:51:00 np0005535838 systemd[1]: Starting nova_compute container...
Nov 25 18:51:00 np0005535838 systemd[1]: libpod-conmon-8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b.scope: Deactivated successfully.
Nov 25 18:51:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:51:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:51:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:51:00 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:51:00 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev d670e01e-9f3f-4218-a490-20b1a9304fca does not exist
Nov 25 18:51:00 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:51:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 18:51:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 18:51:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 18:51:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 18:51:00 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 18:51:00 np0005535838 podman[252512]: 2025-11-25 23:51:00.736673167 +0000 UTC m=+0.097799563 container init 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251118)
Nov 25 18:51:00 np0005535838 podman[252512]: 2025-11-25 23:51:00.743035465 +0000 UTC m=+0.104161841 container start 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 18:51:00 np0005535838 podman[252512]: nova_compute
Nov 25 18:51:00 np0005535838 nova_compute[252550]: + sudo -E kolla_set_configs
Nov 25 18:51:00 np0005535838 systemd[1]: Started nova_compute container.
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Validating config file
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying service configuration files
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /etc/ceph
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Creating directory /etc/ceph
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Writing out command to execute
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 18:51:00 np0005535838 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 18:51:00 np0005535838 nova_compute[252550]: ++ cat /run_command
Nov 25 18:51:00 np0005535838 nova_compute[252550]: + CMD=nova-compute
Nov 25 18:51:00 np0005535838 nova_compute[252550]: + ARGS=
Nov 25 18:51:00 np0005535838 nova_compute[252550]: + sudo kolla_copy_cacerts
Nov 25 18:51:00 np0005535838 nova_compute[252550]: + [[ ! -n '' ]]
Nov 25 18:51:00 np0005535838 nova_compute[252550]: + . kolla_extend_start
Nov 25 18:51:00 np0005535838 nova_compute[252550]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 18:51:00 np0005535838 nova_compute[252550]: Running command: 'nova-compute'
Nov 25 18:51:00 np0005535838 nova_compute[252550]: + umask 0022
Nov 25 18:51:00 np0005535838 nova_compute[252550]: + exec nova-compute
Nov 25 18:51:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:51:01 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:51:01 np0005535838 python3.9[252740]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:51:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:51:01 np0005535838 systemd[1]: Started libpod-conmon-a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07.scope.
Nov 25 18:51:01 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:51:01 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beb674a9b7ae1a559d086947d6ffd10572bb9cc13e26bf5c502719b6d834c75/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 18:51:01 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beb674a9b7ae1a559d086947d6ffd10572bb9cc13e26bf5c502719b6d834c75/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 18:51:01 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beb674a9b7ae1a559d086947d6ffd10572bb9cc13e26bf5c502719b6d834c75/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 18:51:01 np0005535838 podman[252767]: 2025-11-25 23:51:01.929262553 +0000 UTC m=+0.119501286 container init a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 18:51:01 np0005535838 podman[252767]: 2025-11-25 23:51:01.941612101 +0000 UTC m=+0.131850814 container start a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 25 18:51:01 np0005535838 python3.9[252740]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 18:51:02 np0005535838 nova_compute_init[252789]: INFO:nova_statedir:Nova statedir ownership complete
Nov 25 18:51:02 np0005535838 systemd[1]: libpod-a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07.scope: Deactivated successfully.
Nov 25 18:51:02 np0005535838 podman[252790]: 2025-11-25 23:51:02.0763294 +0000 UTC m=+0.094907305 container died a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 25 18:51:02 np0005535838 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07-userdata-shm.mount: Deactivated successfully.
Nov 25 18:51:02 np0005535838 systemd[1]: var-lib-containers-storage-overlay-8beb674a9b7ae1a559d086947d6ffd10572bb9cc13e26bf5c502719b6d834c75-merged.mount: Deactivated successfully.
Nov 25 18:51:02 np0005535838 podman[252801]: 2025-11-25 23:51:02.136833324 +0000 UTC m=+0.066560345 container cleanup a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 18:51:02 np0005535838 systemd[1]: libpod-conmon-a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07.scope: Deactivated successfully.
Nov 25 18:51:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:02 np0005535838 nova_compute[252550]: 2025-11-25 23:51:02.659 252558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 18:51:02 np0005535838 nova_compute[252550]: 2025-11-25 23:51:02.659 252558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 18:51:02 np0005535838 nova_compute[252550]: 2025-11-25 23:51:02.659 252558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 18:51:02 np0005535838 nova_compute[252550]: 2025-11-25 23:51:02.660 252558 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 25 18:51:02 np0005535838 systemd[1]: session-50.scope: Deactivated successfully.
Nov 25 18:51:02 np0005535838 systemd[1]: session-50.scope: Consumed 2min 35.537s CPU time.
Nov 25 18:51:02 np0005535838 systemd-logind[789]: Session 50 logged out. Waiting for processes to exit.
Nov 25 18:51:02 np0005535838 systemd-logind[789]: Removed session 50.
Nov 25 18:51:02 np0005535838 nova_compute[252550]: 2025-11-25 23:51:02.785 252558 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:51:02 np0005535838 nova_compute[252550]: 2025-11-25 23:51:02.812 252558 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:51:02 np0005535838 nova_compute[252550]: 2025-11-25 23:51:02.812 252558 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.300 252558 INFO nova.virt.driver [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.391 252558 INFO nova.compute.provider_config [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.564 252558 DEBUG oslo_concurrency.lockutils [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.564 252558 DEBUG oslo_concurrency.lockutils [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.565 252558 DEBUG oslo_concurrency.lockutils [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.565 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.566 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.566 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.566 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.566 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.567 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.567 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.567 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.568 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.568 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.568 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.569 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.569 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.569 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.569 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.570 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.570 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.570 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.571 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.571 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.572 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.572 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.572 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.572 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.573 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.573 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.574 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.574 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.574 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.575 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.575 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.575 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.576 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.576 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.576 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.577 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.577 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.577 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.578 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.578 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.578 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.579 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.579 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.580 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.580 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.581 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.581 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.581 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.582 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.582 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.582 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.583 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.583 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.584 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.584 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.585 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.585 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.585 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.586 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.586 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.586 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.587 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.587 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.587 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.588 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.588 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.588 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.589 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.589 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.589 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.589 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.590 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.590 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.590 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.591 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.591 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.592 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.592 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.592 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.593 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.593 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.593 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.594 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.594 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.594 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.595 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.595 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.595 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.596 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.596 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.597 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.597 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.598 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.598 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.598 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.599 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.599 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.599 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.600 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.600 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.600 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.601 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.601 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.601 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.602 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.602 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.602 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.603 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.603 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.603 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.605 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.605 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.605 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.605 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.607 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.607 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.607 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.607 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.611 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.611 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.611 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.611 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.612 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.612 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.612 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.612 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.613 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.613 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.613 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.613 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.616 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.616 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.616 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.616 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.618 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.618 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.618 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.618 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.620 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.620 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.620 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.620 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.621 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.621 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.621 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.621 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.623 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.623 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.623 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.623 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.624 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.624 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.624 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.624 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.626 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.626 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.626 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.626 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.631 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.631 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.631 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.631 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.635 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.635 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.635 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.635 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.636 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.636 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.636 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.639 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.639 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.639 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.639 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.642 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.642 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.642 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.642 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.643 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.643 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.643 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.643 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.645 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.645 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.645 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.646 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.646 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.646 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.646 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.647 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.647 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.647 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 WARNING oslo_config.cfg [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 18:51:03 np0005535838 nova_compute[252550]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 18:51:03 np0005535838 nova_compute[252550]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 18:51:03 np0005535838 nova_compute[252550]: and ``live_migration_inbound_addr`` respectively.
Nov 25 18:51:03 np0005535838 nova_compute[252550]: ).  Its value may be silently ignored in the future.#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_secret_uuid        = 101922db-575f-58e2-980f-928050464f69 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.691 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.691 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.691 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.691 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.728 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.728 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.761 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.762 252558 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.889 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.890 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.890 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.891 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.907 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0df9cee100> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.910 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0df9cee100> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.911 252558 INFO nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.924 252558 INFO nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <host>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <uuid>99edd01f-cb88-4b88-a56d-15f374f9d1d0</uuid>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <cpu>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <arch>x86_64</arch>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model>EPYC-Rome-v4</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <vendor>AMD</vendor>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <microcode version='16777317'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <signature family='23' model='49' stepping='0'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='x2apic'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='tsc-deadline'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='osxsave'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='hypervisor'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='tsc_adjust'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='spec-ctrl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='stibp'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='arch-capabilities'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='ssbd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='cmp_legacy'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='topoext'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='virt-ssbd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='lbrv'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='tsc-scale'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='vmcb-clean'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='pause-filter'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='pfthreshold'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='svme-addr-chk'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='rdctl-no'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='skip-l1dfl-vmentry'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='mds-no'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature name='pschange-mc-no'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <pages unit='KiB' size='4'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <pages unit='KiB' size='2048'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <pages unit='KiB' size='1048576'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </cpu>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <power_management>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <suspend_mem/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </power_management>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <iommu support='no'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <migration_features>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <live/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <uri_transports>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <uri_transport>tcp</uri_transport>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <uri_transport>rdma</uri_transport>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </uri_transports>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </migration_features>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <topology>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <cells num='1'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <cell id='0'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:          <memory unit='KiB'>7864320</memory>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:          <pages unit='KiB' size='2048'>0</pages>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:          <distances>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:            <sibling id='0' value='10'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:          </distances>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:          <cpus num='8'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:          </cpus>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        </cell>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </cells>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </topology>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <cache>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </cache>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <secmodel>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model>selinux</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <doi>0</doi>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </secmodel>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <secmodel>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model>dac</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <doi>0</doi>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </secmodel>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  </host>
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <guest>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <os_type>hvm</os_type>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <arch name='i686'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <wordsize>32</wordsize>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <domain type='qemu'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <domain type='kvm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </arch>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <features>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <pae/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <nonpae/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <acpi default='on' toggle='yes'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <apic default='on' toggle='no'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <cpuselection/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <deviceboot/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <disksnapshot default='on' toggle='no'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <externalSnapshot/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </features>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  </guest>
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <guest>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <os_type>hvm</os_type>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <arch name='x86_64'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <wordsize>64</wordsize>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <domain type='qemu'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <domain type='kvm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </arch>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <features>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <acpi default='on' toggle='yes'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <apic default='on' toggle='no'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <cpuselection/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <deviceboot/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <disksnapshot default='on' toggle='no'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <externalSnapshot/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </features>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  </guest>
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 
Nov 25 18:51:03 np0005535838 nova_compute[252550]: </capabilities>
Nov 25 18:51:03 np0005535838 nova_compute[252550]: #033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.937 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.973 252558 WARNING nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.974 252558 DEBUG nova.virt.libvirt.volume.mount [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 25 18:51:03 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.977 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 18:51:03 np0005535838 nova_compute[252550]: <domainCapabilities>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <domain>kvm</domain>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <arch>i686</arch>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <vcpu max='240'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <iothreads supported='yes'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <os supported='yes'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <enum name='firmware'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <loader supported='yes'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <value>rom</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <value>pflash</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <enum name='readonly'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <value>yes</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <value>no</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <enum name='secure'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <value>no</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </loader>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  </os>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:  <cpu>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <mode name='host-passthrough' supported='yes'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <enum name='hostPassthroughMigratable'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <value>on</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <value>off</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <mode name='maximum' supported='yes'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <enum name='maximumMigratable'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <value>on</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <value>off</value>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <mode name='host-model' supported='yes'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <vendor>AMD</vendor>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='x2apic'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='hypervisor'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='stibp'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='ssbd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='overflow-recov'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='succor'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='ibrs'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='lbrv'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc-scale'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='flushbyasid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='pause-filter'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='pfthreshold'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <feature policy='disable' name='xsaves'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:    <mode name='custom' supported='yes'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Broadwell'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-IBRS'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-noTSX'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v1'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v2'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v3'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v4'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake-v1'>
Nov 25 18:51:03 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Dhyana-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Genoa'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='auto-ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='auto-ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-128'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-256'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-512'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v6'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v7'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='KnightsMill'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4fmaps'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4vnniw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512er'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512pf'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='KnightsMill-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4fmaps'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4vnniw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512er'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512pf'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G4-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tbm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G5-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tbm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SierraForest'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ne-convert'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cmpccxadd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SierraForest-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ne-convert'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cmpccxadd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='athlon'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='athlon-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='core2duo'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='core2duo-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='coreduo'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='coreduo-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='n270'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='n270-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='phenom'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='phenom-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </cpu>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <memoryBacking supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <enum name='sourceType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>file</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>anonymous</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>memfd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </memoryBacking>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <devices>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <disk supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='diskDevice'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>disk</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>cdrom</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>floppy</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>lun</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='bus'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ide</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>fdc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>scsi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>sata</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-non-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </disk>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <graphics supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vnc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>egl-headless</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dbus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </graphics>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <video supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='modelType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vga</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>cirrus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>none</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>bochs</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ramfb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </video>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <hostdev supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='mode'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>subsystem</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='startupPolicy'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>default</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>mandatory</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>requisite</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>optional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='subsysType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pci</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>scsi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='capsType'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='pciBackend'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </hostdev>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <rng supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-non-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>random</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>egd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>builtin</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </rng>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <filesystem supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='driverType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>path</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>handle</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtiofs</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </filesystem>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <tpm supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tpm-tis</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tpm-crb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>emulator</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>external</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendVersion'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>2.0</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </tpm>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <redirdev supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='bus'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </redirdev>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <channel supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pty</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>unix</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </channel>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <crypto supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>qemu</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>builtin</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </crypto>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <interface supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>default</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>passt</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </interface>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <panic supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>isa</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>hyperv</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </panic>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <console supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>null</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pty</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dev</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>file</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pipe</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>stdio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>udp</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tcp</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>unix</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>qemu-vdagent</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dbus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </console>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </devices>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <features>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <gic supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <vmcoreinfo supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <genid supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <backingStoreInput supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <backup supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <async-teardown supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <ps2 supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <sev supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <sgx supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <hyperv supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='features'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>relaxed</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vapic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>spinlocks</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vpindex</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>runtime</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>synic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>stimer</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>reset</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vendor_id</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>frequencies</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>reenlightenment</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tlbflush</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ipi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>avic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>emsr_bitmap</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>xmm_input</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <defaults>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <spinlocks>4095</spinlocks>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <stimer_direct>on</stimer_direct>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </defaults>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </hyperv>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <launchSecurity supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='sectype'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tdx</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </launchSecurity>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </features>
Nov 25 18:51:04 np0005535838 nova_compute[252550]: </domainCapabilities>
Nov 25 18:51:04 np0005535838 nova_compute[252550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:03.986 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 18:51:04 np0005535838 nova_compute[252550]: <domainCapabilities>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <domain>kvm</domain>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <arch>i686</arch>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <vcpu max='4096'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <iothreads supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <os supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <enum name='firmware'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <loader supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>rom</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pflash</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='readonly'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>yes</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>no</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='secure'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>no</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </loader>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </os>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <cpu>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='host-passthrough' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='hostPassthroughMigratable'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>on</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>off</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='maximum' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='maximumMigratable'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>on</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>off</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='host-model' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <vendor>AMD</vendor>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='x2apic'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='hypervisor'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='stibp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='ssbd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='overflow-recov'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='succor'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='lbrv'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc-scale'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='flushbyasid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='pause-filter'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='pfthreshold'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='disable' name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='custom' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Dhyana-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Genoa'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='auto-ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='auto-ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-128'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-256'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-512'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v6'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v7'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='KnightsMill'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4fmaps'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4vnniw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512er'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512pf'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='KnightsMill-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4fmaps'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4vnniw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512er'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512pf'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G4-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tbm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G5-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tbm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SierraForest'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ne-convert'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cmpccxadd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SierraForest-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ne-convert'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cmpccxadd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='athlon'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='athlon-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='core2duo'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='core2duo-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='coreduo'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='coreduo-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='n270'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='n270-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='phenom'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='phenom-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </cpu>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <memoryBacking supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <enum name='sourceType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>file</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>anonymous</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>memfd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </memoryBacking>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <devices>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <disk supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='diskDevice'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>disk</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>cdrom</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>floppy</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>lun</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='bus'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>fdc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>scsi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>sata</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-non-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </disk>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <graphics supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vnc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>egl-headless</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dbus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </graphics>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <video supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='modelType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vga</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>cirrus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>none</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>bochs</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ramfb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </video>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <hostdev supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='mode'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>subsystem</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='startupPolicy'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>default</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>mandatory</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>requisite</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>optional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='subsysType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pci</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>scsi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='capsType'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='pciBackend'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </hostdev>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <rng supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-non-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>random</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>egd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>builtin</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </rng>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <filesystem supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='driverType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>path</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>handle</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtiofs</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </filesystem>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <tpm supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tpm-tis</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tpm-crb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>emulator</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>external</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendVersion'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>2.0</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </tpm>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <redirdev supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='bus'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </redirdev>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <channel supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pty</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>unix</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </channel>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <crypto supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>qemu</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>builtin</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </crypto>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <interface supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>default</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>passt</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </interface>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <panic supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>isa</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>hyperv</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </panic>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <console supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>null</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pty</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dev</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>file</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pipe</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>stdio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>udp</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tcp</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>unix</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>qemu-vdagent</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dbus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </console>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </devices>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <features>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <gic supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <vmcoreinfo supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <genid supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <backingStoreInput supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <backup supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <async-teardown supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <ps2 supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <sev supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <sgx supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <hyperv supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='features'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>relaxed</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vapic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>spinlocks</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vpindex</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>runtime</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>synic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>stimer</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>reset</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vendor_id</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>frequencies</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>reenlightenment</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tlbflush</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ipi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>avic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>emsr_bitmap</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>xmm_input</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <defaults>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <spinlocks>4095</spinlocks>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <stimer_direct>on</stimer_direct>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </defaults>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </hyperv>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <launchSecurity supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='sectype'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tdx</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </launchSecurity>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </features>
Nov 25 18:51:04 np0005535838 nova_compute[252550]: </domainCapabilities>
Nov 25 18:51:04 np0005535838 nova_compute[252550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.042 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.046 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 18:51:04 np0005535838 nova_compute[252550]: <domainCapabilities>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <domain>kvm</domain>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <arch>x86_64</arch>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <vcpu max='240'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <iothreads supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <os supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <enum name='firmware'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <loader supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>rom</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pflash</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='readonly'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>yes</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>no</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='secure'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>no</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </loader>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </os>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <cpu>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='host-passthrough' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='hostPassthroughMigratable'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>on</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>off</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='maximum' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='maximumMigratable'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>on</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>off</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='host-model' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <vendor>AMD</vendor>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='x2apic'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='hypervisor'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='stibp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='ssbd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='overflow-recov'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='succor'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='lbrv'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc-scale'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='flushbyasid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='pause-filter'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='pfthreshold'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='disable' name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='custom' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Dhyana-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Genoa'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='auto-ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='auto-ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-128'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-256'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-512'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v6'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v7'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='KnightsMill'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4fmaps'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4vnniw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512er'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512pf'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='KnightsMill-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4fmaps'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4vnniw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512er'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512pf'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G4-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tbm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G5-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tbm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SierraForest'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ne-convert'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cmpccxadd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SierraForest-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ne-convert'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cmpccxadd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='athlon'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='athlon-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='core2duo'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='core2duo-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='coreduo'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='coreduo-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='n270'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='n270-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='phenom'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='phenom-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </cpu>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <memoryBacking supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <enum name='sourceType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>file</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>anonymous</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>memfd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </memoryBacking>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <devices>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <disk supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='diskDevice'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>disk</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>cdrom</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>floppy</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>lun</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='bus'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ide</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>fdc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>scsi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>sata</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-non-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </disk>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <graphics supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vnc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>egl-headless</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dbus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </graphics>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <video supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='modelType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vga</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>cirrus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>none</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>bochs</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ramfb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </video>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <hostdev supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='mode'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>subsystem</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='startupPolicy'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>default</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>mandatory</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>requisite</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>optional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='subsysType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pci</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>scsi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='capsType'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='pciBackend'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </hostdev>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <rng supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-non-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>random</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>egd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>builtin</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </rng>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <filesystem supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='driverType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>path</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>handle</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtiofs</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </filesystem>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <tpm supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tpm-tis</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tpm-crb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>emulator</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>external</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendVersion'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>2.0</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </tpm>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <redirdev supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='bus'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </redirdev>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <channel supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pty</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>unix</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </channel>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <crypto supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>qemu</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>builtin</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </crypto>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <interface supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>default</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>passt</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </interface>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <panic supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>isa</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>hyperv</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </panic>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <console supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>null</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pty</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dev</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>file</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pipe</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>stdio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>udp</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tcp</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>unix</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>qemu-vdagent</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dbus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </console>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </devices>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <features>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <gic supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <vmcoreinfo supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <genid supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <backingStoreInput supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <backup supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <async-teardown supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <ps2 supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <sev supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <sgx supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <hyperv supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='features'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>relaxed</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vapic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>spinlocks</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vpindex</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>runtime</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>synic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>stimer</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>reset</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vendor_id</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>frequencies</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>reenlightenment</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tlbflush</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ipi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>avic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>emsr_bitmap</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>xmm_input</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <defaults>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <spinlocks>4095</spinlocks>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <stimer_direct>on</stimer_direct>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </defaults>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </hyperv>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <launchSecurity supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='sectype'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tdx</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </launchSecurity>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </features>
Nov 25 18:51:04 np0005535838 nova_compute[252550]: </domainCapabilities>
Nov 25 18:51:04 np0005535838 nova_compute[252550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.113 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 18:51:04 np0005535838 nova_compute[252550]: <domainCapabilities>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <domain>kvm</domain>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <arch>x86_64</arch>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <vcpu max='4096'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <iothreads supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <os supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <enum name='firmware'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>efi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <loader supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>rom</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pflash</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='readonly'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>yes</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>no</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='secure'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>yes</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>no</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </loader>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </os>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <cpu>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='host-passthrough' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='hostPassthroughMigratable'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>on</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>off</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='maximum' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='maximumMigratable'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>on</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>off</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='host-model' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <vendor>AMD</vendor>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='x2apic'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='hypervisor'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='stibp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='ssbd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='overflow-recov'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='succor'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='lbrv'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='tsc-scale'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='flushbyasid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='pause-filter'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='pfthreshold'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <feature policy='disable' name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <mode name='custom' supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Broadwell-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Cooperlake-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Denverton-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Dhyana-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Genoa'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='auto-ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='auto-ibrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Milan-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amd-psfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='no-nested-data-bp'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='null-sel-clr-base'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='stibp-always-on'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-Rome-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='EPYC-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='GraniteRapids-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-128'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-256'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx10-512'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='prefetchiti'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Haswell-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v6'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Icelake-Server-v7'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='IvyBridge-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='KnightsMill'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4fmaps'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4vnniw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512er'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512pf'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='KnightsMill-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4fmaps'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-4vnniw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512er'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512pf'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G4-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tbm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Opteron_G5-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fma4'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tbm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xop'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SapphireRapids-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='amx-tile'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-bf16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-fp16'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512-vpopcntdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bitalg'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vbmi2'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrc'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fzrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='la57'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='taa-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='tsx-ldtrk'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xfd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SierraForest'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ne-convert'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cmpccxadd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='SierraForest-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ifma'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-ne-convert'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx-vnni-int8'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='bus-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cmpccxadd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fbsdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='fsrs'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ibrs-all'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mcdt-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pbrsb-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='psdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='sbdr-ssdp-no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='serialize'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vaes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='vpclmulqdq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Client-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='hle'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='rtm'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Skylake-Server-v5'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512bw'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512cd'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512dq'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512f'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='avx512vl'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='invpcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pcid'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='pku'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='mpx'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v2'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v3'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='core-capability'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='split-lock-detect'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='Snowridge-v4'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='cldemote'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='erms'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='gfni'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdir64b'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='movdiri'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='xsaves'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='athlon'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='athlon-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='core2duo'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='core2duo-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='coreduo'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='coreduo-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='n270'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='n270-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='ss'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='phenom'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <blockers model='phenom-v1'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnow'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <feature name='3dnowext'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </blockers>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </mode>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </cpu>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <memoryBacking supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <enum name='sourceType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>file</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>anonymous</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <value>memfd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </memoryBacking>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <devices>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <disk supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='diskDevice'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>disk</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>cdrom</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>floppy</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>lun</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='bus'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>fdc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>scsi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>sata</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-non-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </disk>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <graphics supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vnc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>egl-headless</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dbus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </graphics>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <video supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='modelType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vga</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>cirrus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>none</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>bochs</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ramfb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </video>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <hostdev supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='mode'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>subsystem</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='startupPolicy'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>default</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>mandatory</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>requisite</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>optional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='subsysType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pci</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>scsi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='capsType'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='pciBackend'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </hostdev>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <rng supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtio-non-transitional</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>random</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>egd</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>builtin</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </rng>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <filesystem supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='driverType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>path</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>handle</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>virtiofs</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </filesystem>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <tpm supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tpm-tis</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tpm-crb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>emulator</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>external</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendVersion'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>2.0</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </tpm>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <redirdev supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='bus'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>usb</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </redirdev>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <channel supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pty</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>unix</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </channel>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <crypto supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>qemu</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendModel'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>builtin</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </crypto>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <interface supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='backendType'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>default</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>passt</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </interface>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <panic supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='model'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>isa</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>hyperv</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </panic>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <console supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='type'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>null</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vc</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pty</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dev</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>file</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>pipe</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>stdio</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>udp</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tcp</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>unix</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>qemu-vdagent</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>dbus</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </console>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </devices>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  <features>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <gic supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <vmcoreinfo supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <genid supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <backingStoreInput supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <backup supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <async-teardown supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <ps2 supported='yes'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <sev supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <sgx supported='no'/>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <hyperv supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='features'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>relaxed</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vapic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>spinlocks</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vpindex</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>runtime</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>synic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>stimer</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>reset</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>vendor_id</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>frequencies</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>reenlightenment</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tlbflush</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>ipi</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>avic</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>emsr_bitmap</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>xmm_input</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <defaults>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <spinlocks>4095</spinlocks>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <stimer_direct>on</stimer_direct>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </defaults>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </hyperv>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    <launchSecurity supported='yes'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      <enum name='sectype'>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:        <value>tdx</value>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:      </enum>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:    </launchSecurity>
Nov 25 18:51:04 np0005535838 nova_compute[252550]:  </features>
Nov 25 18:51:04 np0005535838 nova_compute[252550]: </domainCapabilities>
Nov 25 18:51:04 np0005535838 nova_compute[252550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
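The `<domainCapabilities>` document dumped above is what nova's `_get_domain_capabilities` inspects to decide which device models the host supports. A minimal sketch of that kind of check, using only the standard library on a trimmed copy of the XML above (on a live host the same document comes from libvirt's `getDomainCapabilities()`; the helper name `supported_disk_buses` is hypothetical):

```python
# Sketch (assumption): checking a feature in the libvirt
# <domainCapabilities> XML logged above with only the stdlib.
# CAPS_XML is a trimmed copy of the <disk> enum from this log.
import xml.etree.ElementTree as ET

CAPS_XML = """
<domainCapabilities>
  <devices>
    <disk supported='yes'>
      <enum name='bus'>
        <value>fdc</value>
        <value>scsi</value>
        <value>virtio</value>
        <value>usb</value>
        <value>sata</value>
      </enum>
    </disk>
  </devices>
</domainCapabilities>
"""

def supported_disk_buses(caps_xml: str) -> set:
    """Return the disk bus types advertised by domainCapabilities."""
    root = ET.fromstring(caps_xml)
    values = root.findall("./devices/disk/enum[@name='bus']/value")
    return {v.text for v in values}

buses = supported_disk_buses(CAPS_XML)
print(sorted(buses))  # ['fdc', 'sata', 'scsi', 'usb', 'virtio']
assert "virtio" in buses
```

The same XPath pattern works for any of the enums in the dump (e.g. `graphics/enum[@name='type']` or `tpm/enum[@name='backendModel']`).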
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.175 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.175 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.175 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.176 252558 INFO nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Secure Boot support detected#033[00m
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.177 252558 INFO nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.178 252558 INFO nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.185 252558 DEBUG nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.364 252558 INFO nova.virt.node [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Determined node identity 08547965-b35f-4b7b-95d8-902f06aa011c from /var/lib/nova/compute_id#033[00m
Nov 25 18:51:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:04 np0005535838 nova_compute[252550]: 2025-11-25 23:51:04.809 252558 WARNING nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Compute nodes ['08547965-b35f-4b7b-95d8-902f06aa011c'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.206 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.267 252558 WARNING nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.267 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.268 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.268 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.268 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.269 252558 DEBUG oslo_concurrency.processutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:51:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:51:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4294696395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.662 252558 DEBUG oslo_concurrency.processutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
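The `ceph df --format=json` call that just returned 0 feeds nova's RBD storage-usage accounting. A sketch of parsing its output, under the assumption that the top-level `stats` block carries `total_bytes`/`total_used_bytes`/`total_avail_bytes` as in ceph's JSON format; the sample byte values approximate the 60 GiB / 81 MiB figures in the pgmap lines of this log:

```python
# Sketch (assumption): parsing `ceph df --format=json` output the way
# a consumer like nova's RBD driver derives pool capacity.  The
# embedded sample mimics ceph's "stats" block; values approximate the
# 60 GiB / 81 MiB cluster figures seen in this log.
import json

sample = json.loads("""
{
  "stats": {
    "total_bytes": 64424509440,
    "total_used_bytes": 84934656,
    "total_avail_bytes": 64339574784
  }
}
""")

stats = sample["stats"]
used_gib = stats["total_used_bytes"] / 1024 ** 3
free_gib = stats["total_avail_bytes"] / 1024 ** 3
print(f"used {used_gib:.2f} GiB / free {free_gib:.2f} GiB")
```

In production the JSON would come from the subprocess stdout captured by `oslo_concurrency.processutils.execute`, not an embedded string.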
Nov 25 18:51:05 np0005535838 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 18:51:05 np0005535838 systemd[1]: Started libvirt nodedev daemon.
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.953 252558 WARNING nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.955 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5304MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.955 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.955 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:51:05 np0005535838 nova_compute[252550]: 2025-11-25 23:51:05.976 252558 WARNING nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] No compute node record for compute-0.ctlplane.example.com:08547965-b35f-4b7b-95d8-902f06aa011c: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 08547965-b35f-4b7b-95d8-902f06aa011c could not be found.#033[00m
Nov 25 18:51:06 np0005535838 nova_compute[252550]: 2025-11-25 23:51:06.016 252558 INFO nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 08547965-b35f-4b7b-95d8-902f06aa011c#033[00m
Nov 25 18:51:06 np0005535838 nova_compute[252550]: 2025-11-25 23:51:06.074 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 18:51:06 np0005535838 nova_compute[252550]: 2025-11-25 23:51:06.074 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 18:51:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.029 252558 INFO nova.scheduler.client.report [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [req-36d7e41c-c7ff-4335-87d4-e2ccdcdfbbe6] Created resource provider record via placement API for resource provider with UUID 08547965-b35f-4b7b-95d8-902f06aa011c and name compute-0.ctlplane.example.com.#033[00m
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.404 252558 DEBUG oslo_concurrency.processutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:51:07 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:51:07 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322178470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.809 252558 DEBUG oslo_concurrency.processutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.815 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 25 18:51:07 np0005535838 nova_compute[252550]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.815 252558 INFO nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.816 252558 DEBUG nova.compute.provider_tree [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updating inventory in ProviderTree for provider 08547965-b35f-4b7b-95d8-902f06aa011c with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.817 252558 DEBUG nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.895 252558 DEBUG nova.scheduler.client.report [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updated inventory for provider 08547965-b35f-4b7b-95d8-902f06aa011c with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.895 252558 DEBUG nova.compute.provider_tree [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updating resource provider 08547965-b35f-4b7b-95d8-902f06aa011c generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 25 18:51:07 np0005535838 nova_compute[252550]: 2025-11-25 23:51:07.896 252558 DEBUG nova.compute.provider_tree [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updating inventory in ProviderTree for provider 08547965-b35f-4b7b-95d8-902f06aa011c with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
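The inventory records pushed to Placement above determine schedulable capacity: for each resource class, allocations are capped at `(total - reserved) * allocation_ratio`. A sketch of that arithmetic using the exact values from this log (the helper name `capacity` is hypothetical):

```python
# Sketch: how Placement-style inventory translates into effective
# schedulable capacity.  Values copied from the inventory logged for
# provider 08547965-b35f-4b7b-95d8-902f06aa011c above.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
}

def capacity(inv: dict) -> dict:
    """Effective capacity per class: (total - reserved) * allocation_ratio."""
    return {
        rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
        for rc, v in inv.items()
    }

caps = capacity(inventory)
print(caps)  # MEMORY_MB: 7168.0, VCPU: 32.0, DISK_GB: ~53.1
```

This matches the log's figures: 7680 MB RAM with 512 MB reserved and no overcommit, 8 vCPUs overcommitted 4:1, and 59 GB of disk derated to 90%.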
Nov 25 18:51:08 np0005535838 nova_compute[252550]: 2025-11-25 23:51:08.026 252558 DEBUG nova.compute.provider_tree [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updating resource provider 08547965-b35f-4b7b-95d8-902f06aa011c generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 25 18:51:08 np0005535838 nova_compute[252550]: 2025-11-25 23:51:08.069 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 18:51:08 np0005535838 nova_compute[252550]: 2025-11-25 23:51:08.070 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:51:08 np0005535838 nova_compute[252550]: 2025-11-25 23:51:08.070 252558 DEBUG nova.service [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 25 18:51:08 np0005535838 nova_compute[252550]: 2025-11-25 23:51:08.148 252558 DEBUG nova.service [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 25 18:51:08 np0005535838 nova_compute[252550]: 2025-11-25 23:51:08.148 252558 DEBUG nova.servicegroup.drivers.db [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 25 18:51:08 np0005535838 podman[252949]: 2025-11-25 23:51:08.240929911 +0000 UTC m=+0.058961483 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:51:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:10 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:51:10 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3046 writes, 12K keys, 3046 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s#012Cumulative WAL: 3046 writes, 3046 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1278 writes, 5305 keys, 1278 commit groups, 1.0 writes per commit group, ingest: 5.67 MB, 0.01 MB/s#012Interval WAL: 1278 writes, 1278 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    113.2      0.09              0.05         6    0.015       0      0       0.0       0.0#012  L6      1/0    4.62 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    172.2    140.5      0.17              0.10         5    0.035     16K   2263       0.0       0.0#012 Sum      1/0    4.62 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4    113.6    131.2      0.26              0.15        11    0.024     16K   2263       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5    121.1    123.5      0.15              0.09         6    0.026     10K   1494       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    172.2    140.5      0.17              0.10         5    0.035     16K   2263       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    116.2      0.09              0.05         5    0.017       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.010, interval 0.004#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.03 MB/s write, 0.03 GB read, 0.02 MB/s read, 0.3 seconds#012Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f0edcc31f0#2 capacity: 308.00 MB usage: 1.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(83,1.07 MB,0.345929%) FilterBlock(12,54.36 KB,0.0172355%) IndexBlock(12,107.14 KB,0.0339706%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 18:51:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:15 np0005535838 podman[252969]: 2025-11-25 23:51:15.277158458 +0000 UTC m=+0.107241022 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 18:51:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:17 np0005535838 podman[252996]: 2025-11-25 23:51:17.258233886 +0000 UTC m=+0.078678726 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:51:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:51:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:51:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:51:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:51:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:51:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:51:26 np0005535838 nova_compute[252550]: 2025-11-25 23:51:26.150 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:51:26 np0005535838 nova_compute[252550]: 2025-11-25 23:51:26.351 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:51:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:51:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3410503557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:51:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:51:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3410503557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:51:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:51:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3915266591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:51:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:51:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3915266591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:51:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:39 np0005535838 podman[253016]: 2025-11-25 23:51:39.245436986 +0000 UTC m=+0.067882149 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 18:51:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:51:40.759 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:51:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:51:40.760 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:51:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:51:40.760 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:51:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:51:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570208736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:51:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:51:41 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570208736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:51:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:46 np0005535838 podman[253036]: 2025-11-25 23:51:46.336503135 +0000 UTC m=+0.158819439 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 18:51:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:48 np0005535838 podman[253064]: 2025-11-25 23:51:48.287621939 +0000 UTC m=+0.108536426 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 18:51:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:51:56
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['images', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:51:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:51:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev ec3420d9-9566-43cd-8118-96d9e585ff52 does not exist
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 750da9d5-d847-4d04-9af5-a9ad762e81b0 does not exist
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev df1494c2-f84f-4f77-b4e8-a529ef8ad0e3 does not exist
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:52:01 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:52:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:52:02 np0005535838 podman[253358]: 2025-11-25 23:52:02.381352279 +0000 UTC m=+0.039035455 container create 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:52:02 np0005535838 systemd[1]: Started libpod-conmon-5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79.scope.
Nov 25 18:52:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:02 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:52:02 np0005535838 podman[253358]: 2025-11-25 23:52:02.362460079 +0000 UTC m=+0.020143245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:52:02 np0005535838 podman[253358]: 2025-11-25 23:52:02.47387387 +0000 UTC m=+0.131557046 container init 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:52:02 np0005535838 podman[253358]: 2025-11-25 23:52:02.481408539 +0000 UTC m=+0.139091715 container start 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:52:02 np0005535838 podman[253358]: 2025-11-25 23:52:02.484756968 +0000 UTC m=+0.142440154 container attach 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:52:02 np0005535838 fervent_khayyam[253374]: 167 167
Nov 25 18:52:02 np0005535838 systemd[1]: libpod-5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79.scope: Deactivated successfully.
Nov 25 18:52:02 np0005535838 conmon[253374]: conmon 5a6e355ddf9adb78f3a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79.scope/container/memory.events
Nov 25 18:52:02 np0005535838 podman[253358]: 2025-11-25 23:52:02.488776665 +0000 UTC m=+0.146459801 container died 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:52:02 np0005535838 systemd[1]: var-lib-containers-storage-overlay-9a1758dbe8b1e3bcbe70968a02a8dee654ecb8882eab207a28ab3d022afbecb4-merged.mount: Deactivated successfully.
Nov 25 18:52:02 np0005535838 podman[253358]: 2025-11-25 23:52:02.533340445 +0000 UTC m=+0.191023611 container remove 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 18:52:02 np0005535838 systemd[1]: libpod-conmon-5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79.scope: Deactivated successfully.
Nov 25 18:52:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:52:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:52:02 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:52:02 np0005535838 podman[253397]: 2025-11-25 23:52:02.777337578 +0000 UTC m=+0.078689855 container create ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:52:02 np0005535838 systemd[1]: Started libpod-conmon-ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e.scope.
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.824 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.826 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.826 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.827 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 18:52:02 np0005535838 podman[253397]: 2025-11-25 23:52:02.746462581 +0000 UTC m=+0.047814908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:52:02 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:52:02 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:02 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:02 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:02 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:02 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:02 np0005535838 podman[253397]: 2025-11-25 23:52:02.88083251 +0000 UTC m=+0.182184827 container init ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:52:02 np0005535838 podman[253397]: 2025-11-25 23:52:02.889067448 +0000 UTC m=+0.190419725 container start ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:52:02 np0005535838 podman[253397]: 2025-11-25 23:52:02.892767496 +0000 UTC m=+0.194119773 container attach ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.899 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.899 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.900 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.901 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.901 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.902 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.902 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.903 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.903 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.947 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.948 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.948 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.948 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 18:52:02 np0005535838 nova_compute[252550]: 2025-11-25 23:52:02.949 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:52:03 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:52:03 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2052798394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:52:03 np0005535838 nova_compute[252550]: 2025-11-25 23:52:03.362 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:52:03 np0005535838 nova_compute[252550]: 2025-11-25 23:52:03.601 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 18:52:03 np0005535838 nova_compute[252550]: 2025-11-25 23:52:03.605 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5260MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 18:52:03 np0005535838 nova_compute[252550]: 2025-11-25 23:52:03.606 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:52:03 np0005535838 nova_compute[252550]: 2025-11-25 23:52:03.606 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:52:03 np0005535838 nova_compute[252550]: 2025-11-25 23:52:03.723 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 18:52:03 np0005535838 nova_compute[252550]: 2025-11-25 23:52:03.724 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 18:52:03 np0005535838 nova_compute[252550]: 2025-11-25 23:52:03.754 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:52:03 np0005535838 competent_franklin[253414]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:52:03 np0005535838 competent_franklin[253414]: --> relative data size: 1.0
Nov 25 18:52:03 np0005535838 competent_franklin[253414]: --> All data devices are unavailable
Nov 25 18:52:03 np0005535838 systemd[1]: libpod-ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e.scope: Deactivated successfully.
Nov 25 18:52:03 np0005535838 podman[253397]: 2025-11-25 23:52:03.951582542 +0000 UTC m=+1.252934779 container died ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:52:03 np0005535838 systemd[1]: var-lib-containers-storage-overlay-903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464-merged.mount: Deactivated successfully.
Nov 25 18:52:04 np0005535838 podman[253397]: 2025-11-25 23:52:04.039745397 +0000 UTC m=+1.341097664 container remove ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:52:04 np0005535838 systemd[1]: libpod-conmon-ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e.scope: Deactivated successfully.
Nov 25 18:52:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:52:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501805013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:52:04 np0005535838 nova_compute[252550]: 2025-11-25 23:52:04.217 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:52:04 np0005535838 nova_compute[252550]: 2025-11-25 23:52:04.225 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:52:04 np0005535838 nova_compute[252550]: 2025-11-25 23:52:04.248 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:52:04 np0005535838 nova_compute[252550]: 2025-11-25 23:52:04.292 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 18:52:04 np0005535838 nova_compute[252550]: 2025-11-25 23:52:04.292 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:52:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:04 np0005535838 podman[253641]: 2025-11-25 23:52:04.781142427 +0000 UTC m=+0.039913789 container create f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:52:04 np0005535838 systemd[1]: Started libpod-conmon-f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee.scope.
Nov 25 18:52:04 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:52:04 np0005535838 podman[253641]: 2025-11-25 23:52:04.763567781 +0000 UTC m=+0.022339193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:52:04 np0005535838 podman[253641]: 2025-11-25 23:52:04.869506207 +0000 UTC m=+0.128277589 container init f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 25 18:52:04 np0005535838 podman[253641]: 2025-11-25 23:52:04.881422083 +0000 UTC m=+0.140193445 container start f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 25 18:52:04 np0005535838 podman[253641]: 2025-11-25 23:52:04.885547453 +0000 UTC m=+0.144318785 container attach f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 18:52:04 np0005535838 interesting_chatterjee[253658]: 167 167
Nov 25 18:52:04 np0005535838 systemd[1]: libpod-f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee.scope: Deactivated successfully.
Nov 25 18:52:04 np0005535838 podman[253641]: 2025-11-25 23:52:04.889775785 +0000 UTC m=+0.148547147 container died f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:52:04 np0005535838 systemd[1]: var-lib-containers-storage-overlay-bc548d54a43c0e581f94437fa3a8812ae7ba65f8fb95e3b8686d74358e3df1d8-merged.mount: Deactivated successfully.
Nov 25 18:52:04 np0005535838 podman[253641]: 2025-11-25 23:52:04.93337789 +0000 UTC m=+0.192149252 container remove f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:52:04 np0005535838 systemd[1]: libpod-conmon-f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee.scope: Deactivated successfully.
Nov 25 18:52:05 np0005535838 podman[253682]: 2025-11-25 23:52:05.129855134 +0000 UTC m=+0.048172157 container create ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 18:52:05 np0005535838 systemd[1]: Started libpod-conmon-ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05.scope.
Nov 25 18:52:05 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:52:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:05 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:05 np0005535838 podman[253682]: 2025-11-25 23:52:05.115826592 +0000 UTC m=+0.034143655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:52:05 np0005535838 podman[253682]: 2025-11-25 23:52:05.220976017 +0000 UTC m=+0.139293070 container init ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:52:05 np0005535838 podman[253682]: 2025-11-25 23:52:05.23805941 +0000 UTC m=+0.156376483 container start ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:52:05 np0005535838 podman[253682]: 2025-11-25 23:52:05.242066836 +0000 UTC m=+0.160383879 container attach ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:52:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]: {
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:    "0": [
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:        {
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "devices": [
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "/dev/loop3"
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            ],
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_name": "ceph_lv0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_size": "21470642176",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "name": "ceph_lv0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "tags": {
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.cluster_name": "ceph",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.crush_device_class": "",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.encrypted": "0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.osd_id": "0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.type": "block",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.vdo": "0"
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            },
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "type": "block",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "vg_name": "ceph_vg0"
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:        }
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:    ],
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:    "1": [
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:        {
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "devices": [
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "/dev/loop4"
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            ],
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_name": "ceph_lv1",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_size": "21470642176",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "name": "ceph_lv1",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "tags": {
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.cluster_name": "ceph",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.crush_device_class": "",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.encrypted": "0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.osd_id": "1",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.type": "block",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.vdo": "0"
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            },
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "type": "block",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "vg_name": "ceph_vg1"
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:        }
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:    ],
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:    "2": [
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:        {
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "devices": [
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "/dev/loop5"
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            ],
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_name": "ceph_lv2",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_size": "21470642176",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "name": "ceph_lv2",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "tags": {
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.cluster_name": "ceph",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.crush_device_class": "",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.encrypted": "0",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.osd_id": "2",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.type": "block",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:                "ceph.vdo": "0"
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            },
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "type": "block",
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:            "vg_name": "ceph_vg2"
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:        }
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]:    ]
Nov 25 18:52:05 np0005535838 stupefied_fermi[253698]: }
Nov 25 18:52:06 np0005535838 systemd[1]: libpod-ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05.scope: Deactivated successfully.
Nov 25 18:52:06 np0005535838 podman[253682]: 2025-11-25 23:52:06.001881493 +0000 UTC m=+0.920198546 container died ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 18:52:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54-merged.mount: Deactivated successfully.
Nov 25 18:52:06 np0005535838 podman[253682]: 2025-11-25 23:52:06.060872445 +0000 UTC m=+0.979189488 container remove ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:52:06 np0005535838 systemd[1]: libpod-conmon-ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05.scope: Deactivated successfully.
Nov 25 18:52:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:06 np0005535838 podman[253860]: 2025-11-25 23:52:06.811755245 +0000 UTC m=+0.039393144 container create 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:52:06 np0005535838 systemd[1]: Started libpod-conmon-6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c.scope.
Nov 25 18:52:06 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:52:06 np0005535838 podman[253860]: 2025-11-25 23:52:06.79305912 +0000 UTC m=+0.020697029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:52:06 np0005535838 podman[253860]: 2025-11-25 23:52:06.905800346 +0000 UTC m=+0.133438255 container init 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:52:06 np0005535838 podman[253860]: 2025-11-25 23:52:06.917666961 +0000 UTC m=+0.145304890 container start 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:52:06 np0005535838 podman[253860]: 2025-11-25 23:52:06.92139115 +0000 UTC m=+0.149029049 container attach 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 18:52:06 np0005535838 quizzical_goldstine[253876]: 167 167
Nov 25 18:52:06 np0005535838 systemd[1]: libpod-6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c.scope: Deactivated successfully.
Nov 25 18:52:06 np0005535838 podman[253860]: 2025-11-25 23:52:06.926749251 +0000 UTC m=+0.154387220 container died 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:52:06 np0005535838 systemd[1]: var-lib-containers-storage-overlay-35b40ebc2af17f551843363b420e3d9c3b207cca71866bbc43758faef6c0eb9d-merged.mount: Deactivated successfully.
Nov 25 18:52:06 np0005535838 podman[253860]: 2025-11-25 23:52:06.972468952 +0000 UTC m=+0.200106881 container remove 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:52:06 np0005535838 systemd[1]: libpod-conmon-6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c.scope: Deactivated successfully.
Nov 25 18:52:07 np0005535838 podman[253899]: 2025-11-25 23:52:07.198829779 +0000 UTC m=+0.048205039 container create c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:52:07 np0005535838 podman[253899]: 2025-11-25 23:52:07.172898401 +0000 UTC m=+0.022273721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:52:07 np0005535838 systemd[1]: Started libpod-conmon-c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09.scope.
Nov 25 18:52:07 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:52:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:07 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:52:07 np0005535838 podman[253899]: 2025-11-25 23:52:07.313601289 +0000 UTC m=+0.162976549 container init c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:52:07 np0005535838 podman[253899]: 2025-11-25 23:52:07.323182513 +0000 UTC m=+0.172557733 container start c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 18:52:07 np0005535838 podman[253899]: 2025-11-25 23:52:07.326759347 +0000 UTC m=+0.176134617 container attach c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]: {
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "osd_id": 2,
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "type": "bluestore"
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:    },
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "osd_id": 1,
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "type": "bluestore"
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:    },
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "osd_id": 0,
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:        "type": "bluestore"
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]:    }
Nov 25 18:52:08 np0005535838 jolly_brattain[253916]: }
Nov 25 18:52:08 np0005535838 systemd[1]: libpod-c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09.scope: Deactivated successfully.
Nov 25 18:52:08 np0005535838 conmon[253916]: conmon c27d22830548d313eb83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09.scope/container/memory.events
Nov 25 18:52:08 np0005535838 podman[253899]: 2025-11-25 23:52:08.256793623 +0000 UTC m=+1.106168843 container died c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 18:52:08 np0005535838 systemd[1]: var-lib-containers-storage-overlay-74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82-merged.mount: Deactivated successfully.
Nov 25 18:52:08 np0005535838 podman[253899]: 2025-11-25 23:52:08.315466167 +0000 UTC m=+1.164841377 container remove c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 18:52:08 np0005535838 systemd[1]: libpod-conmon-c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09.scope: Deactivated successfully.
Nov 25 18:52:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:52:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:52:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:52:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:52:08 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 166a13fc-f909-4fe2-adff-4ac89d3f399a does not exist
Nov 25 18:52:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:52:09 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:52:10 np0005535838 podman[254009]: 2025-11-25 23:52:10.283158869 +0000 UTC m=+0.097187686 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Nov 25 18:52:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2037975218' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 18:52:15 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14322 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 18:52:15 np0005535838 ceph-mgr[75954]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 18:52:15 np0005535838 ceph-mgr[75954]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.600898) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735600934, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1992, "num_deletes": 506, "total_data_size": 1901277, "memory_usage": 1938544, "flush_reason": "Manual Compaction"}
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735615875, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1854530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12147, "largest_seqno": 14138, "table_properties": {"data_size": 1846000, "index_size": 4772, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 19787, "raw_average_key_size": 18, "raw_value_size": 1826852, "raw_average_value_size": 1712, "num_data_blocks": 219, "num_entries": 1067, "num_filter_entries": 1067, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114546, "oldest_key_time": 1764114546, "file_creation_time": 1764114735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 15028 microseconds, and 7974 cpu microseconds.
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.615924) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1854530 bytes OK
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.615944) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.617767) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.617787) EVENT_LOG_v1 {"time_micros": 1764114735617781, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.617805) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1891819, prev total WAL file size 1891819, number of live WAL files 2.
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.618763) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1811KB)], [32(4728KB)]
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735618802, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 6696543, "oldest_snapshot_seqno": -1}
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3289 keys, 5268892 bytes, temperature: kUnknown
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735658050, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 5268892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5244521, "index_size": 15046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 77873, "raw_average_key_size": 23, "raw_value_size": 5183030, "raw_average_value_size": 1575, "num_data_blocks": 650, "num_entries": 3289, "num_filter_entries": 3289, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.658736) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 5268892 bytes
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.660483) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.6 rd, 132.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.6 +0.0 blob) out(5.0 +0.0 blob), read-write-amplify(6.5) write-amplify(2.8) OK, records in: 4314, records dropped: 1025 output_compression: NoCompression
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.660513) EVENT_LOG_v1 {"time_micros": 1764114735660498, "job": 14, "event": "compaction_finished", "compaction_time_micros": 39714, "compaction_time_cpu_micros": 23263, "output_level": 6, "num_output_files": 1, "total_output_size": 5268892, "num_input_records": 4314, "num_output_records": 3289, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735661361, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735662994, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.618660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:52:15 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:52:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:17 np0005535838 podman[254030]: 2025-11-25 23:52:17.302089864 +0000 UTC m=+0.128172536 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 18:52:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:19 np0005535838 podman[254057]: 2025-11-25 23:52:19.263520279 +0000 UTC m=+0.090315263 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 18:52:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:52:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:52:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:52:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:52:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:52:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:52:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 18:52:30 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2617582484' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 18:52:30 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14324 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 18:52:30 np0005535838 ceph-mgr[75954]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 18:52:30 np0005535838 ceph-mgr[75954]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 18:52:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:52:40.760 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:52:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:52:40.760 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:52:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:52:40.761 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:52:41 np0005535838 podman[254076]: 2025-11-25 23:52:41.26865509 +0000 UTC m=+0.090770386 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 18:52:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:48 np0005535838 podman[254096]: 2025-11-25 23:52:48.29004473 +0000 UTC m=+0.118356107 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 25 18:52:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:50 np0005535838 podman[254123]: 2025-11-25 23:52:50.266816271 +0000 UTC m=+0.084309434 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 18:52:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:52:51 np0005535838 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, i
Nov 25 18:52:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:52:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:52:55 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown,
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:52:56
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images']
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:52:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:52:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:01 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:53:01 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown,
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [devicehealth INFO root] Check health
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:53:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:53:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.283 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.283 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.311 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.311 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.311 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.333 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.334 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.335 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.335 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.336 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.336 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.337 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.337 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.383 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.383 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.384 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.384 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.385 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:53:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:53:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876070695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:53:04 np0005535838 nova_compute[252550]: 2025-11-25 23:53:04.866 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.102 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.104 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5332MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.104 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.105 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.198 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.199 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.236 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:53:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:53:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114545605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.653 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.662 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.690 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.693 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 18:53:05 np0005535838 nova_compute[252550]: 2025-11-25 23:53:05.693 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:53:06 np0005535838 nova_compute[252550]: 2025-11-25 23:53:06.180 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:53:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:53:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:53:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:53:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:53:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:53:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:53:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:10 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev e2db07c6-6b6a-440f-92ef-9c3df3e7e447 does not exist
Nov 25 18:53:10 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 7e0dbc29-7ede-450e-a32f-5d59d1a0244e does not exist
Nov 25 18:53:10 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 61cc813f-25b2-4370-a25a-639befed7035 does not exist
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:53:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:10 np0005535838 podman[254581]: 2025-11-25 23:53:10.675507895 +0000 UTC m=+0.062682832 container create 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:53:10 np0005535838 systemd[1]: Started libpod-conmon-9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7.scope.
Nov 25 18:53:10 np0005535838 podman[254581]: 2025-11-25 23:53:10.65379555 +0000 UTC m=+0.040970507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:53:10 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:53:10 np0005535838 podman[254581]: 2025-11-25 23:53:10.787027168 +0000 UTC m=+0.174202175 container init 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:53:10 np0005535838 podman[254581]: 2025-11-25 23:53:10.800016592 +0000 UTC m=+0.187191559 container start 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:53:10 np0005535838 podman[254581]: 2025-11-25 23:53:10.803872735 +0000 UTC m=+0.191047702 container attach 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:53:10 np0005535838 elastic_mestorf[254598]: 167 167
Nov 25 18:53:10 np0005535838 systemd[1]: libpod-9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7.scope: Deactivated successfully.
Nov 25 18:53:10 np0005535838 podman[254581]: 2025-11-25 23:53:10.808689373 +0000 UTC m=+0.195864350 container died 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:53:10 np0005535838 systemd[1]: var-lib-containers-storage-overlay-73a833f86bad84a5b34b0569bd33ba487e3ceafeacafb9e4b08669fd13ea26f7-merged.mount: Deactivated successfully.
Nov 25 18:53:10 np0005535838 podman[254581]: 2025-11-25 23:53:10.857104635 +0000 UTC m=+0.244279612 container remove 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 18:53:10 np0005535838 systemd[1]: libpod-conmon-9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7.scope: Deactivated successfully.
Nov 25 18:53:11 np0005535838 podman[254622]: 2025-11-25 23:53:11.049199183 +0000 UTC m=+0.049316767 container create d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:53:11 np0005535838 systemd[1]: Started libpod-conmon-d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8.scope.
Nov 25 18:53:11 np0005535838 podman[254622]: 2025-11-25 23:53:11.027575421 +0000 UTC m=+0.027693045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:53:11 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:53:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:11 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:11 np0005535838 podman[254622]: 2025-11-25 23:53:11.151049921 +0000 UTC m=+0.151167515 container init d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:53:11 np0005535838 podman[254622]: 2025-11-25 23:53:11.167455076 +0000 UTC m=+0.167572700 container start d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:53:11 np0005535838 podman[254622]: 2025-11-25 23:53:11.170824565 +0000 UTC m=+0.170942179 container attach d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 25 18:53:12 np0005535838 podman[254658]: 2025-11-25 23:53:12.286634271 +0000 UTC m=+0.094317439 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 18:53:12 np0005535838 zen_jemison[254638]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:53:12 np0005535838 zen_jemison[254638]: --> relative data size: 1.0
Nov 25 18:53:12 np0005535838 zen_jemison[254638]: --> All data devices are unavailable
Nov 25 18:53:12 np0005535838 systemd[1]: libpod-d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8.scope: Deactivated successfully.
Nov 25 18:53:12 np0005535838 systemd[1]: libpod-d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8.scope: Consumed 1.168s CPU time.
Nov 25 18:53:12 np0005535838 podman[254622]: 2025-11-25 23:53:12.386917658 +0000 UTC m=+1.387035272 container died d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:53:12 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb-merged.mount: Deactivated successfully.
Nov 25 18:53:12 np0005535838 podman[254622]: 2025-11-25 23:53:12.468412377 +0000 UTC m=+1.468529991 container remove d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 18:53:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:12 np0005535838 systemd[1]: libpod-conmon-d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8.scope: Deactivated successfully.
Nov 25 18:53:13 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:53:13.077 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:82:13', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '36:f3:66:b7:57:d1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 18:53:13 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:53:13.079 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 18:53:13 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:53:13.080 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 18:53:13 np0005535838 podman[254841]: 2025-11-25 23:53:13.342683095 +0000 UTC m=+0.059872967 container create 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:53:13 np0005535838 systemd[1]: Started libpod-conmon-4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078.scope.
Nov 25 18:53:13 np0005535838 podman[254841]: 2025-11-25 23:53:13.322524291 +0000 UTC m=+0.039714203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:53:13 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:53:13 np0005535838 podman[254841]: 2025-11-25 23:53:13.441860592 +0000 UTC m=+0.159050504 container init 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:53:13 np0005535838 podman[254841]: 2025-11-25 23:53:13.453632994 +0000 UTC m=+0.170822886 container start 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:53:13 np0005535838 podman[254841]: 2025-11-25 23:53:13.457820585 +0000 UTC m=+0.175010477 container attach 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:53:13 np0005535838 systemd[1]: libpod-4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078.scope: Deactivated successfully.
Nov 25 18:53:13 np0005535838 angry_yonath[254857]: 167 167
Nov 25 18:53:13 np0005535838 conmon[254857]: conmon 4bdf816496dfb344d29d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078.scope/container/memory.events
Nov 25 18:53:13 np0005535838 podman[254841]: 2025-11-25 23:53:13.464605995 +0000 UTC m=+0.181795897 container died 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:53:13 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c0141ee6c347bf809ed57721410b50b959d4469dca2592db5bc32e243deaee10-merged.mount: Deactivated successfully.
Nov 25 18:53:13 np0005535838 podman[254841]: 2025-11-25 23:53:13.51990685 +0000 UTC m=+0.237096752 container remove 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:53:13 np0005535838 systemd[1]: libpod-conmon-4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078.scope: Deactivated successfully.
Nov 25 18:53:13 np0005535838 podman[254882]: 2025-11-25 23:53:13.713465246 +0000 UTC m=+0.049770138 container create 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:53:13 np0005535838 systemd[1]: Started libpod-conmon-0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0.scope.
Nov 25 18:53:13 np0005535838 podman[254882]: 2025-11-25 23:53:13.691979758 +0000 UTC m=+0.028284660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:53:13 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:53:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:13 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:13 np0005535838 podman[254882]: 2025-11-25 23:53:13.816832575 +0000 UTC m=+0.153137507 container init 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:53:13 np0005535838 podman[254882]: 2025-11-25 23:53:13.823894572 +0000 UTC m=+0.160199474 container start 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:53:13 np0005535838 podman[254882]: 2025-11-25 23:53:13.82758424 +0000 UTC m=+0.163889142 container attach 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 25 18:53:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]: {
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:    "0": [
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:        {
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "devices": [
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "/dev/loop3"
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            ],
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_name": "ceph_lv0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_size": "21470642176",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "name": "ceph_lv0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "tags": {
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.cluster_name": "ceph",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.crush_device_class": "",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.encrypted": "0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.osd_id": "0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.type": "block",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.vdo": "0"
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            },
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "type": "block",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "vg_name": "ceph_vg0"
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:        }
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:    ],
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:    "1": [
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:        {
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "devices": [
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "/dev/loop4"
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            ],
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_name": "ceph_lv1",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_size": "21470642176",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "name": "ceph_lv1",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "tags": {
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.cluster_name": "ceph",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.crush_device_class": "",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.encrypted": "0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.osd_id": "1",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.type": "block",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.vdo": "0"
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            },
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "type": "block",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "vg_name": "ceph_vg1"
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:        }
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:    ],
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:    "2": [
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:        {
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "devices": [
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "/dev/loop5"
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            ],
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_name": "ceph_lv2",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_size": "21470642176",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "name": "ceph_lv2",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "tags": {
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.cluster_name": "ceph",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.crush_device_class": "",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.encrypted": "0",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.osd_id": "2",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.type": "block",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:                "ceph.vdo": "0"
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            },
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "type": "block",
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:            "vg_name": "ceph_vg2"
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:        }
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]:    ]
Nov 25 18:53:14 np0005535838 pensive_antonelli[254898]: }
Nov 25 18:53:14 np0005535838 systemd[1]: libpod-0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0.scope: Deactivated successfully.
Nov 25 18:53:14 np0005535838 podman[254882]: 2025-11-25 23:53:14.582390984 +0000 UTC m=+0.918695956 container died 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:53:14 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481-merged.mount: Deactivated successfully.
Nov 25 18:53:14 np0005535838 podman[254882]: 2025-11-25 23:53:14.667670413 +0000 UTC m=+1.003975335 container remove 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:53:14 np0005535838 systemd[1]: libpod-conmon-0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0.scope: Deactivated successfully.
Nov 25 18:53:15 np0005535838 podman[255058]: 2025-11-25 23:53:15.555264345 +0000 UTC m=+0.069699758 container create ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 18:53:15 np0005535838 systemd[1]: Started libpod-conmon-ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc.scope.
Nov 25 18:53:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:15 np0005535838 podman[255058]: 2025-11-25 23:53:15.526305487 +0000 UTC m=+0.040740970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:53:15 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:53:15 np0005535838 podman[255058]: 2025-11-25 23:53:15.654045341 +0000 UTC m=+0.168480774 container init ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:53:15 np0005535838 podman[255058]: 2025-11-25 23:53:15.665243307 +0000 UTC m=+0.179678730 container start ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:53:15 np0005535838 podman[255058]: 2025-11-25 23:53:15.669256594 +0000 UTC m=+0.183692047 container attach ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:53:15 np0005535838 suspicious_beaver[255074]: 167 167
Nov 25 18:53:15 np0005535838 systemd[1]: libpod-ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc.scope: Deactivated successfully.
Nov 25 18:53:15 np0005535838 podman[255058]: 2025-11-25 23:53:15.673349162 +0000 UTC m=+0.187784585 container died ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:53:15 np0005535838 systemd[1]: var-lib-containers-storage-overlay-1cc861ab4644f8d1a9b507b4616bb62bd6bdb034dbfb6c50ac5664e1fee6d50b-merged.mount: Deactivated successfully.
Nov 25 18:53:15 np0005535838 podman[255058]: 2025-11-25 23:53:15.720962353 +0000 UTC m=+0.235397776 container remove ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:53:15 np0005535838 systemd[1]: libpod-conmon-ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc.scope: Deactivated successfully.
Nov 25 18:53:15 np0005535838 podman[255097]: 2025-11-25 23:53:15.982788429 +0000 UTC m=+0.071451504 container create cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 18:53:16 np0005535838 systemd[1]: Started libpod-conmon-cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3.scope.
Nov 25 18:53:16 np0005535838 podman[255097]: 2025-11-25 23:53:15.956469132 +0000 UTC m=+0.045132267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:53:16 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:53:16 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:16 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:16 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:16 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:53:16 np0005535838 podman[255097]: 2025-11-25 23:53:16.101669698 +0000 UTC m=+0.190332813 container init cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:53:16 np0005535838 podman[255097]: 2025-11-25 23:53:16.113604314 +0000 UTC m=+0.202267399 container start cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:53:16 np0005535838 podman[255097]: 2025-11-25 23:53:16.11834961 +0000 UTC m=+0.207012745 container attach cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:53:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]: {
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "osd_id": 2,
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "type": "bluestore"
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:    },
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "osd_id": 1,
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "type": "bluestore"
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:    },
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "osd_id": 0,
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:        "type": "bluestore"
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]:    }
Nov 25 18:53:17 np0005535838 laughing_einstein[255113]: }
Nov 25 18:53:17 np0005535838 systemd[1]: libpod-cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3.scope: Deactivated successfully.
Nov 25 18:53:17 np0005535838 podman[255097]: 2025-11-25 23:53:17.206800012 +0000 UTC m=+1.295463087 container died cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:53:17 np0005535838 systemd[1]: libpod-cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3.scope: Consumed 1.100s CPU time.
Nov 25 18:53:17 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04-merged.mount: Deactivated successfully.
Nov 25 18:53:17 np0005535838 podman[255097]: 2025-11-25 23:53:17.284542421 +0000 UTC m=+1.373205506 container remove cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:53:17 np0005535838 systemd[1]: libpod-conmon-cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3.scope: Deactivated successfully.
Nov 25 18:53:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:53:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:53:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:17 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev bf195f5f-0f37-46b0-a695-6f451d27bf18 does not exist
Nov 25 18:53:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:53:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2398708724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:53:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:53:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2398708724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:53:18 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:18 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:53:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:19 np0005535838 podman[255209]: 2025-11-25 23:53:19.301489978 +0000 UTC m=+0.131677448 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 18:53:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:21 np0005535838 podman[255235]: 2025-11-25 23:53:21.237112762 +0000 UTC m=+0.058630204 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:53:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:53:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:53:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:53:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:53:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:53:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:53:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:53:40.762 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:53:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:53:40.762 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:53:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:53:40.763 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:53:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:43 np0005535838 podman[255257]: 2025-11-25 23:53:43.27301952 +0000 UTC m=+0.094469983 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:53:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:50 np0005535838 podman[255278]: 2025-11-25 23:53:50.326218222 +0000 UTC m=+0.143945074 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 18:53:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:52 np0005535838 podman[255306]: 2025-11-25 23:53:52.258609258 +0000 UTC m=+0.080657027 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 18:53:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:53:56
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr']
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:53:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:53:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:54:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:54:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:02 np0005535838 nova_compute[252550]: 2025-11-25 23:54:02.818 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:54:02 np0005535838 nova_compute[252550]: 2025-11-25 23:54:02.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:54:02 np0005535838 nova_compute[252550]: 2025-11-25 23:54:02.863 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:54:02 np0005535838 nova_compute[252550]: 2025-11-25 23:54:02.864 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:54:02 np0005535838 nova_compute[252550]: 2025-11-25 23:54:02.864 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:54:02 np0005535838 nova_compute[252550]: 2025-11-25 23:54:02.864 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 18:54:02 np0005535838 nova_compute[252550]: 2025-11-25 23:54:02.865 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:54:03 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:54:03 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445300293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:54:03 np0005535838 nova_compute[252550]: 2025-11-25 23:54:03.299 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:54:03 np0005535838 nova_compute[252550]: 2025-11-25 23:54:03.534 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 18:54:03 np0005535838 nova_compute[252550]: 2025-11-25 23:54:03.535 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5308MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 18:54:03 np0005535838 nova_compute[252550]: 2025-11-25 23:54:03.536 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:54:03 np0005535838 nova_compute[252550]: 2025-11-25 23:54:03.536 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:54:04 np0005535838 nova_compute[252550]: 2025-11-25 23:54:04.115 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 18:54:04 np0005535838 nova_compute[252550]: 2025-11-25 23:54:04.115 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 18:54:04 np0005535838 nova_compute[252550]: 2025-11-25 23:54:04.133 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:54:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:54:04 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944120377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:54:04 np0005535838 nova_compute[252550]: 2025-11-25 23:54:04.586 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:54:04 np0005535838 nova_compute[252550]: 2025-11-25 23:54:04.595 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:54:04 np0005535838 nova_compute[252550]: 2025-11-25 23:54:04.637 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:54:04 np0005535838 nova_compute[252550]: 2025-11-25 23:54:04.640 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 18:54:04 np0005535838 nova_compute[252550]: 2025-11-25 23:54:04.640 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:54:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.642 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.642 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.642 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 18:54:05 np0005535838 nova_compute[252550]: 2025-11-25 23:54:05.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:54:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:06 np0005535838 nova_compute[252550]: 2025-11-25 23:54:06.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:54:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:14 np0005535838 podman[255369]: 2025-11-25 23:54:14.260265685 +0000 UTC m=+0.083651260 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 18:54:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:54:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4023673141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:54:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:54:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4023673141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:54:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:54:18 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 7bcc1c24-e996-4266-8ad2-a87459473886 does not exist
Nov 25 18:54:18 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev e202ec38-7e46-4e37-a606-3a3d0d8861cf does not exist
Nov 25 18:54:18 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev cd7fc8a0-648a-496c-bb06-a2d34f04365e does not exist
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:54:18 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:54:19 np0005535838 podman[255663]: 2025-11-25 23:54:19.536629293 +0000 UTC m=+0.059430561 container create 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:54:19 np0005535838 systemd[1]: Started libpod-conmon-8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b.scope.
Nov 25 18:54:19 np0005535838 podman[255663]: 2025-11-25 23:54:19.503225929 +0000 UTC m=+0.026027277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:54:19 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:54:19 np0005535838 podman[255663]: 2025-11-25 23:54:19.643713138 +0000 UTC m=+0.166514496 container init 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:54:19 np0005535838 podman[255663]: 2025-11-25 23:54:19.65723631 +0000 UTC m=+0.180037608 container start 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:54:19 np0005535838 podman[255663]: 2025-11-25 23:54:19.66134159 +0000 UTC m=+0.184142958 container attach 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:54:19 np0005535838 practical_stonebraker[255680]: 167 167
Nov 25 18:54:19 np0005535838 systemd[1]: libpod-8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b.scope: Deactivated successfully.
Nov 25 18:54:19 np0005535838 conmon[255680]: conmon 8b497cd4b0b6b94630a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b.scope/container/memory.events
Nov 25 18:54:19 np0005535838 podman[255663]: 2025-11-25 23:54:19.669244302 +0000 UTC m=+0.192045660 container died 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:54:19 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ed1052129f83b34ef5b7a94a2a15d0f237df7d6085d9a120574d072e43bdb51f-merged.mount: Deactivated successfully.
Nov 25 18:54:19 np0005535838 podman[255663]: 2025-11-25 23:54:19.730408558 +0000 UTC m=+0.253209866 container remove 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 18:54:19 np0005535838 systemd[1]: libpod-conmon-8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b.scope: Deactivated successfully.
Nov 25 18:54:19 np0005535838 podman[255703]: 2025-11-25 23:54:19.976939055 +0000 UTC m=+0.069709675 container create 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 18:54:20 np0005535838 systemd[1]: Started libpod-conmon-4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c.scope.
Nov 25 18:54:20 np0005535838 podman[255703]: 2025-11-25 23:54:19.946778429 +0000 UTC m=+0.039549099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:54:20 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:54:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:20 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:20 np0005535838 podman[255703]: 2025-11-25 23:54:20.09705367 +0000 UTC m=+0.189824310 container init 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 18:54:20 np0005535838 podman[255703]: 2025-11-25 23:54:20.111129576 +0000 UTC m=+0.203900196 container start 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:54:20 np0005535838 podman[255703]: 2025-11-25 23:54:20.115588977 +0000 UTC m=+0.208359607 container attach 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:54:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:21 np0005535838 mystifying_vaughan[255719]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:54:21 np0005535838 mystifying_vaughan[255719]: --> relative data size: 1.0
Nov 25 18:54:21 np0005535838 mystifying_vaughan[255719]: --> All data devices are unavailable
Nov 25 18:54:21 np0005535838 systemd[1]: libpod-4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c.scope: Deactivated successfully.
Nov 25 18:54:21 np0005535838 podman[255703]: 2025-11-25 23:54:21.287612291 +0000 UTC m=+1.380382881 container died 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 18:54:21 np0005535838 systemd[1]: libpod-4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c.scope: Consumed 1.128s CPU time.
Nov 25 18:54:21 np0005535838 systemd[1]: var-lib-containers-storage-overlay-9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb-merged.mount: Deactivated successfully.
Nov 25 18:54:21 np0005535838 podman[255703]: 2025-11-25 23:54:21.382301434 +0000 UTC m=+1.475072064 container remove 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:54:21 np0005535838 podman[255744]: 2025-11-25 23:54:21.387588416 +0000 UTC m=+0.204696069 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 18:54:21 np0005535838 systemd[1]: libpod-conmon-4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c.scope: Deactivated successfully.
Nov 25 18:54:22 np0005535838 podman[255926]: 2025-11-25 23:54:22.146129455 +0000 UTC m=+0.046569138 container create 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:54:22 np0005535838 systemd[1]: Started libpod-conmon-88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8.scope.
Nov 25 18:54:22 np0005535838 podman[255926]: 2025-11-25 23:54:22.124915647 +0000 UTC m=+0.025355320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:54:22 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:54:22 np0005535838 podman[255926]: 2025-11-25 23:54:22.253563069 +0000 UTC m=+0.154002782 container init 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:54:22 np0005535838 podman[255926]: 2025-11-25 23:54:22.260776623 +0000 UTC m=+0.161216276 container start 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:54:22 np0005535838 podman[255926]: 2025-11-25 23:54:22.264386989 +0000 UTC m=+0.164826702 container attach 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:54:22 np0005535838 vigorous_chandrasekhar[255943]: 167 167
Nov 25 18:54:22 np0005535838 systemd[1]: libpod-88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8.scope: Deactivated successfully.
Nov 25 18:54:22 np0005535838 podman[255926]: 2025-11-25 23:54:22.266565518 +0000 UTC m=+0.167005171 container died 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:54:22 np0005535838 systemd[1]: var-lib-containers-storage-overlay-55d9935370e809b7bfe317c6161d60159ffd1eea305d2a3231512683daf5a00e-merged.mount: Deactivated successfully.
Nov 25 18:54:22 np0005535838 podman[255926]: 2025-11-25 23:54:22.307692188 +0000 UTC m=+0.208131831 container remove 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:54:22 np0005535838 systemd[1]: libpod-conmon-88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8.scope: Deactivated successfully.
Nov 25 18:54:22 np0005535838 podman[255956]: 2025-11-25 23:54:22.392655352 +0000 UTC m=+0.068372471 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 18:54:22 np0005535838 podman[255988]: 2025-11-25 23:54:22.484189961 +0000 UTC m=+0.049722611 container create 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 18:54:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:22 np0005535838 systemd[1]: Started libpod-conmon-66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d.scope.
Nov 25 18:54:22 np0005535838 podman[255988]: 2025-11-25 23:54:22.457841886 +0000 UTC m=+0.023374566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:54:22 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:54:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:22 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:22 np0005535838 podman[255988]: 2025-11-25 23:54:22.588810611 +0000 UTC m=+0.154343301 container init 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:54:22 np0005535838 podman[255988]: 2025-11-25 23:54:22.603491523 +0000 UTC m=+0.169024213 container start 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:54:22 np0005535838 podman[255988]: 2025-11-25 23:54:22.607498891 +0000 UTC m=+0.173031591 container attach 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:54:23 np0005535838 loving_euclid[256005]: {
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:    "0": [
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:        {
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "devices": [
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "/dev/loop3"
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            ],
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_name": "ceph_lv0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_size": "21470642176",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "name": "ceph_lv0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "tags": {
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.cluster_name": "ceph",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.crush_device_class": "",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.encrypted": "0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.osd_id": "0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.type": "block",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.vdo": "0"
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            },
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "type": "block",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "vg_name": "ceph_vg0"
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:        }
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:    ],
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:    "1": [
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:        {
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "devices": [
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "/dev/loop4"
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            ],
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_name": "ceph_lv1",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_size": "21470642176",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "name": "ceph_lv1",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "tags": {
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.cluster_name": "ceph",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.crush_device_class": "",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.encrypted": "0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.osd_id": "1",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.type": "block",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.vdo": "0"
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            },
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "type": "block",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "vg_name": "ceph_vg1"
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:        }
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:    ],
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:    "2": [
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:        {
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "devices": [
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "/dev/loop5"
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            ],
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_name": "ceph_lv2",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_size": "21470642176",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "name": "ceph_lv2",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "tags": {
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.cluster_name": "ceph",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.crush_device_class": "",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.encrypted": "0",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.osd_id": "2",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.type": "block",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:                "ceph.vdo": "0"
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            },
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "type": "block",
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:            "vg_name": "ceph_vg2"
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:        }
Nov 25 18:54:23 np0005535838 loving_euclid[256005]:    ]
Nov 25 18:54:23 np0005535838 loving_euclid[256005]: }
Nov 25 18:54:23 np0005535838 systemd[1]: libpod-66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d.scope: Deactivated successfully.
Nov 25 18:54:23 np0005535838 podman[255988]: 2025-11-25 23:54:23.379727147 +0000 UTC m=+0.945259817 container died 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 18:54:23 np0005535838 systemd[1]: var-lib-containers-storage-overlay-52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94-merged.mount: Deactivated successfully.
Nov 25 18:54:23 np0005535838 podman[255988]: 2025-11-25 23:54:23.43068683 +0000 UTC m=+0.996219490 container remove 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:54:23 np0005535838 systemd[1]: libpod-conmon-66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d.scope: Deactivated successfully.
Nov 25 18:54:24 np0005535838 podman[256164]: 2025-11-25 23:54:24.065078656 +0000 UTC m=+0.050690517 container create df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:54:24 np0005535838 systemd[1]: Started libpod-conmon-df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace.scope.
Nov 25 18:54:24 np0005535838 podman[256164]: 2025-11-25 23:54:24.038329611 +0000 UTC m=+0.023941522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:54:24 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:54:24 np0005535838 podman[256164]: 2025-11-25 23:54:24.155574499 +0000 UTC m=+0.141186370 container init df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 18:54:24 np0005535838 podman[256164]: 2025-11-25 23:54:24.165428812 +0000 UTC m=+0.151040683 container start df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:54:24 np0005535838 podman[256164]: 2025-11-25 23:54:24.169864161 +0000 UTC m=+0.155476032 container attach df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:54:24 np0005535838 pedantic_gagarin[256180]: 167 167
Nov 25 18:54:24 np0005535838 systemd[1]: libpod-df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace.scope: Deactivated successfully.
Nov 25 18:54:24 np0005535838 conmon[256180]: conmon df1ec64940f1182489d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace.scope/container/memory.events
Nov 25 18:54:24 np0005535838 podman[256164]: 2025-11-25 23:54:24.175932653 +0000 UTC m=+0.161544484 container died df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:54:24 np0005535838 systemd[1]: var-lib-containers-storage-overlay-0acc40e5fe6d09075b61532ea2fc196c48ce7722ffe3c0321c0c3b38de5180f4-merged.mount: Deactivated successfully.
Nov 25 18:54:24 np0005535838 podman[256164]: 2025-11-25 23:54:24.216402916 +0000 UTC m=+0.202014747 container remove df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:54:24 np0005535838 systemd[1]: libpod-conmon-df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace.scope: Deactivated successfully.
Nov 25 18:54:24 np0005535838 podman[256204]: 2025-11-25 23:54:24.41158387 +0000 UTC m=+0.052898257 container create 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:54:24 np0005535838 systemd[1]: Started libpod-conmon-2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2.scope.
Nov 25 18:54:24 np0005535838 podman[256204]: 2025-11-25 23:54:24.389631372 +0000 UTC m=+0.030945809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:54:24 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:54:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:24 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:54:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:24 np0005535838 podman[256204]: 2025-11-25 23:54:24.530906713 +0000 UTC m=+0.172221130 container init 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:54:24 np0005535838 podman[256204]: 2025-11-25 23:54:24.53942222 +0000 UTC m=+0.180736597 container start 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:54:24 np0005535838 podman[256204]: 2025-11-25 23:54:24.542695288 +0000 UTC m=+0.184009685 container attach 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 18:54:25 np0005535838 cool_bartik[256221]: {
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "osd_id": 2,
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "type": "bluestore"
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:    },
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "osd_id": 1,
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "type": "bluestore"
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:    },
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "osd_id": 0,
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:        "type": "bluestore"
Nov 25 18:54:25 np0005535838 cool_bartik[256221]:    }
Nov 25 18:54:25 np0005535838 cool_bartik[256221]: }
Nov 25 18:54:25 np0005535838 systemd[1]: libpod-2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2.scope: Deactivated successfully.
Nov 25 18:54:25 np0005535838 podman[256204]: 2025-11-25 23:54:25.479675602 +0000 UTC m=+1.120989989 container died 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:54:25 np0005535838 systemd[1]: var-lib-containers-storage-overlay-86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648-merged.mount: Deactivated successfully.
Nov 25 18:54:25 np0005535838 podman[256204]: 2025-11-25 23:54:25.557460703 +0000 UTC m=+1.198775080 container remove 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 18:54:25 np0005535838 systemd[1]: libpod-conmon-2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2.scope: Deactivated successfully.
Nov 25 18:54:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:54:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:25 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:54:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:54:25 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:54:25 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 9fb0b7c0-acd8-4f3a-8c53-e7cebf3f8697 does not exist
Nov 25 18:54:25 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:54:25 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:54:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:54:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:54:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:54:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:54:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:54:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:54:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:54:40.763 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:54:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:54:40.764 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:54:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:54:40.764 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:54:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 25 18:54:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 25 18:54:41 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 25 18:54:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:42 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 25 18:54:42 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 25 18:54:42 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 25 18:54:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:54:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 25 18:54:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 25 18:54:44 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 25 18:54:45 np0005535838 podman[256318]: 2025-11-25 23:54:45.290062009 +0000 UTC m=+0.111041023 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 18:54:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 16 MiB data, 97 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.7 MiB/s wr, 31 op/s
Nov 25 18:54:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 21 MiB data, 102 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.1 MiB/s wr, 29 op/s
Nov 25 18:54:48 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 25 18:54:49 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 25 18:54:49 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 25 18:54:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 21 MiB data, 102 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.7 MiB/s wr, 25 op/s
Nov 25 18:54:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 25 18:54:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 25 18:54:50 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 25 18:54:52 np0005535838 podman[256341]: 2025-11-25 23:54:52.345441966 +0000 UTC m=+0.169662541 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 25 18:54:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.4 MiB/s wr, 50 op/s
Nov 25 18:54:53 np0005535838 podman[256369]: 2025-11-25 23:54:53.252982733 +0000 UTC m=+0.074177686 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 18:54:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 3.1 MiB/s wr, 24 op/s
Nov 25 18:54:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:54:56
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['vms', 'backups', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:54:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Nov 25 18:54:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 MiB/s wr, 20 op/s
Nov 25 18:55:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.689987) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900690024, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1584, "num_deletes": 251, "total_data_size": 1722541, "memory_usage": 1754272, "flush_reason": "Manual Compaction"}
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900704018, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1679012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14139, "largest_seqno": 15722, "table_properties": {"data_size": 1671642, "index_size": 4381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15020, "raw_average_key_size": 19, "raw_value_size": 1656768, "raw_average_value_size": 2185, "num_data_blocks": 201, "num_entries": 758, "num_filter_entries": 758, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114736, "oldest_key_time": 1764114736, "file_creation_time": 1764114900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 14101 microseconds, and 7832 cpu microseconds.
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.704084) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1679012 bytes OK
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.704109) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.705851) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.705879) EVENT_LOG_v1 {"time_micros": 1764114900705870, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.705906) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1715654, prev total WAL file size 1715654, number of live WAL files 2.
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.706949) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1639KB)], [35(5145KB)]
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900706994, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 6947904, "oldest_snapshot_seqno": -1}
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3529 keys, 5759352 bytes, temperature: kUnknown
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900746972, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 5759352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5732687, "index_size": 16763, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8837, "raw_key_size": 83358, "raw_average_key_size": 23, "raw_value_size": 5666168, "raw_average_value_size": 1605, "num_data_blocks": 722, "num_entries": 3529, "num_filter_entries": 3529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.747371) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 5759352 bytes
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.748896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.3 rd, 143.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 5.0 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.4) OK, records in: 4047, records dropped: 518 output_compression: NoCompression
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.748926) EVENT_LOG_v1 {"time_micros": 1764114900748911, "job": 16, "event": "compaction_finished", "compaction_time_micros": 40096, "compaction_time_cpu_micros": 23614, "output_level": 6, "num_output_files": 1, "total_output_size": 5759352, "num_input_records": 4047, "num_output_records": 3529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900749776, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900751663, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.706822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:55:00 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:55:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:55:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Nov 25 18:55:02 np0005535838 nova_compute[252550]: 2025-11-25 23:55:02.818 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:03 np0005535838 nova_compute[252550]: 2025-11-25 23:55:03.816 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:04 np0005535838 nova_compute[252550]: 2025-11-25 23:55:04.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:04 np0005535838 nova_compute[252550]: 2025-11-25 23:55:04.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:04 np0005535838 nova_compute[252550]: 2025-11-25 23:55:04.955 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:55:04 np0005535838 nova_compute[252550]: 2025-11-25 23:55:04.955 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:55:04 np0005535838 nova_compute[252550]: 2025-11-25 23:55:04.955 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:55:04 np0005535838 nova_compute[252550]: 2025-11-25 23:55:04.956 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 18:55:04 np0005535838 nova_compute[252550]: 2025-11-25 23:55:04.956 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:55:05 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3745899408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:55:05 np0005535838 nova_compute[252550]: 2025-11-25 23:55:05.409 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:55:05 np0005535838 nova_compute[252550]: 2025-11-25 23:55:05.594 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 18:55:05 np0005535838 nova_compute[252550]: 2025-11-25 23:55:05.595 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5298MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 18:55:05 np0005535838 nova_compute[252550]: 2025-11-25 23:55:05.595 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:55:05 np0005535838 nova_compute[252550]: 2025-11-25 23:55:05.595 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:55:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:05 np0005535838 nova_compute[252550]: 2025-11-25 23:55:05.667 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 18:55:05 np0005535838 nova_compute[252550]: 2025-11-25 23:55:05.667 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 18:55:05 np0005535838 nova_compute[252550]: 2025-11-25 23:55:05.682 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:55:06 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783575913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:55:06 np0005535838 nova_compute[252550]: 2025-11-25 23:55:06.103 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:55:06 np0005535838 nova_compute[252550]: 2025-11-25 23:55:06.109 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:55:06 np0005535838 nova_compute[252550]: 2025-11-25 23:55:06.172 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:55:06 np0005535838 nova_compute[252550]: 2025-11-25 23:55:06.174 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 18:55:06 np0005535838 nova_compute[252550]: 2025-11-25 23:55:06.174 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:55:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.174 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.175 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.175 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.197 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.198 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.199 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.199 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.201 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.201 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 18:55:07 np0005535838 nova_compute[252550]: 2025-11-25 23:55:07.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:55:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:09 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:55:09.605 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:82:13', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '36:f3:66:b7:57:d1'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 18:55:09 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:55:09.606 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 18:55:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v757: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:16 np0005535838 podman[256432]: 2025-11-25 23:55:16.263014791 +0000 UTC m=+0.083078425 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 18:55:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:16 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:55:16.609 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 18:55:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:55:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3709224588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:55:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:55:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3709224588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:55:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:23 np0005535838 podman[256452]: 2025-11-25 23:55:23.312405087 +0000 UTC m=+0.133066162 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 18:55:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 25 18:55:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 25 18:55:23 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 25 18:55:23 np0005535838 podman[256478]: 2025-11-25 23:55:23.437256279 +0000 UTC m=+0.089167208 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 25 18:55:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:55:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:55:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:55:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:55:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:55:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:55:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:55:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 65 MiB data, 130 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.4 MiB/s wr, 16 op/s
Nov 25 18:55:26 np0005535838 podman[256669]: 2025-11-25 23:55:26.75450209 +0000 UTC m=+0.118510933 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 18:55:26 np0005535838 podman[256669]: 2025-11-25 23:55:26.851017772 +0000 UTC m=+0.215026595 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 18:55:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:55:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:55:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:28 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 3535cbed-4b46-4a93-8c61-82797458b847 does not exist
Nov 25 18:55:28 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 49e16c39-4f57-42b9-8d82-18d4ce3db8d7 does not exist
Nov 25 18:55:28 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 8dc95a44-b238-42ec-a1c9-003c9675db29 does not exist
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:55:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 81 MiB data, 146 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 MiB/s wr, 38 op/s
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:28 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:55:29 np0005535838 podman[257074]: 2025-11-25 23:55:29.2447731 +0000 UTC m=+0.069610583 container create 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:55:29 np0005535838 systemd[1]: Started libpod-conmon-068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75.scope.
Nov 25 18:55:29 np0005535838 podman[257074]: 2025-11-25 23:55:29.219484313 +0000 UTC m=+0.044321856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:55:29 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:55:29 np0005535838 podman[257074]: 2025-11-25 23:55:29.340115362 +0000 UTC m=+0.164952865 container init 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:55:29 np0005535838 podman[257074]: 2025-11-25 23:55:29.351448165 +0000 UTC m=+0.176285648 container start 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:55:29 np0005535838 podman[257074]: 2025-11-25 23:55:29.355030871 +0000 UTC m=+0.179868364 container attach 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:55:29 np0005535838 festive_proskuriakova[257091]: 167 167
Nov 25 18:55:29 np0005535838 systemd[1]: libpod-068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75.scope: Deactivated successfully.
Nov 25 18:55:29 np0005535838 podman[257074]: 2025-11-25 23:55:29.359388427 +0000 UTC m=+0.184225920 container died 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:55:29 np0005535838 systemd[1]: var-lib-containers-storage-overlay-4bfb86f8928dd0530e6f57e5f355013a9259d89e7795247990c5ccf64d07b952-merged.mount: Deactivated successfully.
Nov 25 18:55:29 np0005535838 podman[257074]: 2025-11-25 23:55:29.41816725 +0000 UTC m=+0.243004733 container remove 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:55:29 np0005535838 systemd[1]: libpod-conmon-068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75.scope: Deactivated successfully.
Nov 25 18:55:29 np0005535838 podman[257115]: 2025-11-25 23:55:29.680276454 +0000 UTC m=+0.065852753 container create 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:55:29 np0005535838 systemd[1]: Started libpod-conmon-8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c.scope.
Nov 25 18:55:29 np0005535838 podman[257115]: 2025-11-25 23:55:29.655614705 +0000 UTC m=+0.041191074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:55:29 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:55:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:29 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:29 np0005535838 podman[257115]: 2025-11-25 23:55:29.777835995 +0000 UTC m=+0.163412324 container init 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 18:55:29 np0005535838 podman[257115]: 2025-11-25 23:55:29.793023812 +0000 UTC m=+0.178600141 container start 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:55:29 np0005535838 podman[257115]: 2025-11-25 23:55:29.79707528 +0000 UTC m=+0.182651569 container attach 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:55:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 25 18:55:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 25 18:55:30 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 25 18:55:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 81 MiB data, 146 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.6 MiB/s wr, 54 op/s
Nov 25 18:55:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:30 np0005535838 sad_morse[257132]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:55:30 np0005535838 sad_morse[257132]: --> relative data size: 1.0
Nov 25 18:55:30 np0005535838 sad_morse[257132]: --> All data devices are unavailable
Nov 25 18:55:30 np0005535838 systemd[1]: libpod-8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c.scope: Deactivated successfully.
Nov 25 18:55:30 np0005535838 podman[257115]: 2025-11-25 23:55:30.875755686 +0000 UTC m=+1.261332045 container died 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 18:55:30 np0005535838 systemd[1]: libpod-8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c.scope: Consumed 1.036s CPU time.
Nov 25 18:55:30 np0005535838 systemd[1]: var-lib-containers-storage-overlay-cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82-merged.mount: Deactivated successfully.
Nov 25 18:55:30 np0005535838 podman[257115]: 2025-11-25 23:55:30.930778969 +0000 UTC m=+1.316355258 container remove 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 18:55:30 np0005535838 systemd[1]: libpod-conmon-8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c.scope: Deactivated successfully.
Nov 25 18:55:31 np0005535838 podman[257315]: 2025-11-25 23:55:31.706165408 +0000 UTC m=+0.062422631 container create dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:55:31 np0005535838 systemd[1]: Started libpod-conmon-dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952.scope.
Nov 25 18:55:31 np0005535838 podman[257315]: 2025-11-25 23:55:31.68007157 +0000 UTC m=+0.036328844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:55:31 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:55:31 np0005535838 podman[257315]: 2025-11-25 23:55:31.798323905 +0000 UTC m=+0.154581168 container init dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 18:55:31 np0005535838 podman[257315]: 2025-11-25 23:55:31.808640121 +0000 UTC m=+0.164897304 container start dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:55:31 np0005535838 podman[257315]: 2025-11-25 23:55:31.811399945 +0000 UTC m=+0.167657168 container attach dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:55:31 np0005535838 thirsty_rhodes[257331]: 167 167
Nov 25 18:55:31 np0005535838 systemd[1]: libpod-dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952.scope: Deactivated successfully.
Nov 25 18:55:31 np0005535838 podman[257315]: 2025-11-25 23:55:31.81607612 +0000 UTC m=+0.172333333 container died dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:55:31 np0005535838 systemd[1]: var-lib-containers-storage-overlay-610bcf078433a5ff4a6e0128982712f94017d0a3d30e715f8a5a6d7f55664b91-merged.mount: Deactivated successfully.
Nov 25 18:55:31 np0005535838 podman[257315]: 2025-11-25 23:55:31.862137993 +0000 UTC m=+0.218395206 container remove dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:55:31 np0005535838 systemd[1]: libpod-conmon-dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952.scope: Deactivated successfully.
Nov 25 18:55:32 np0005535838 podman[257357]: 2025-11-25 23:55:32.11538108 +0000 UTC m=+0.067578320 container create 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:55:32 np0005535838 systemd[1]: Started libpod-conmon-489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1.scope.
Nov 25 18:55:32 np0005535838 podman[257357]: 2025-11-25 23:55:32.086983939 +0000 UTC m=+0.039181219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:55:32 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:55:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:32 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:32 np0005535838 podman[257357]: 2025-11-25 23:55:32.222392873 +0000 UTC m=+0.174590163 container init 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:55:32 np0005535838 podman[257357]: 2025-11-25 23:55:32.232980937 +0000 UTC m=+0.185178167 container start 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:55:32 np0005535838 podman[257357]: 2025-11-25 23:55:32.237373764 +0000 UTC m=+0.189570994 container attach 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:55:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 14 MiB/s wr, 94 op/s
Nov 25 18:55:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 25 18:55:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 25 18:55:32 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 25 18:55:32 np0005535838 competent_bouman[257373]: {
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:    "0": [
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:        {
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "devices": [
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "/dev/loop3"
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            ],
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_name": "ceph_lv0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_size": "21470642176",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "name": "ceph_lv0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "tags": {
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.cluster_name": "ceph",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.crush_device_class": "",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.encrypted": "0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.osd_id": "0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.type": "block",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.vdo": "0"
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            },
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "type": "block",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "vg_name": "ceph_vg0"
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:        }
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:    ],
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:    "1": [
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:        {
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "devices": [
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "/dev/loop4"
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            ],
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_name": "ceph_lv1",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_size": "21470642176",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "name": "ceph_lv1",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "tags": {
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.cluster_name": "ceph",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.crush_device_class": "",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.encrypted": "0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.osd_id": "1",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.type": "block",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.vdo": "0"
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            },
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "type": "block",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "vg_name": "ceph_vg1"
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:        }
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:    ],
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:    "2": [
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:        {
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "devices": [
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "/dev/loop5"
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            ],
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_name": "ceph_lv2",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_size": "21470642176",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "name": "ceph_lv2",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "tags": {
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.cluster_name": "ceph",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.crush_device_class": "",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.encrypted": "0",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.osd_id": "2",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.type": "block",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:                "ceph.vdo": "0"
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            },
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "type": "block",
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:            "vg_name": "ceph_vg2"
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:        }
Nov 25 18:55:32 np0005535838 competent_bouman[257373]:    ]
Nov 25 18:55:32 np0005535838 competent_bouman[257373]: }
Nov 25 18:55:33 np0005535838 systemd[1]: libpod-489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1.scope: Deactivated successfully.
Nov 25 18:55:33 np0005535838 podman[257357]: 2025-11-25 23:55:33.001300517 +0000 UTC m=+0.953497727 container died 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:55:33 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5-merged.mount: Deactivated successfully.
Nov 25 18:55:33 np0005535838 podman[257357]: 2025-11-25 23:55:33.067293022 +0000 UTC m=+1.019490262 container remove 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:55:33 np0005535838 systemd[1]: libpod-conmon-489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1.scope: Deactivated successfully.
Nov 25 18:55:33 np0005535838 podman[257540]: 2025-11-25 23:55:33.872585003 +0000 UTC m=+0.050720819 container create cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 18:55:33 np0005535838 systemd[1]: Started libpod-conmon-cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff.scope.
Nov 25 18:55:33 np0005535838 podman[257540]: 2025-11-25 23:55:33.848378885 +0000 UTC m=+0.026514751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:55:33 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:55:33 np0005535838 podman[257540]: 2025-11-25 23:55:33.971394787 +0000 UTC m=+0.149530593 container init cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:55:33 np0005535838 podman[257540]: 2025-11-25 23:55:33.982564656 +0000 UTC m=+0.160700452 container start cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:55:33 np0005535838 podman[257540]: 2025-11-25 23:55:33.985923836 +0000 UTC m=+0.164059662 container attach cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 18:55:33 np0005535838 tender_colden[257557]: 167 167
Nov 25 18:55:33 np0005535838 systemd[1]: libpod-cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff.scope: Deactivated successfully.
Nov 25 18:55:33 np0005535838 podman[257540]: 2025-11-25 23:55:33.991089834 +0000 UTC m=+0.169225650 container died cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 18:55:34 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2ae8a80c5d1fea38673994d8ab86ee5e502347a4581e1a13eb653102f9b5b6a4-merged.mount: Deactivated successfully.
Nov 25 18:55:34 np0005535838 podman[257540]: 2025-11-25 23:55:34.029570034 +0000 UTC m=+0.207705820 container remove cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:55:34 np0005535838 systemd[1]: libpod-conmon-cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff.scope: Deactivated successfully.
Nov 25 18:55:34 np0005535838 podman[257580]: 2025-11-25 23:55:34.228882598 +0000 UTC m=+0.058471696 container create d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:55:34 np0005535838 systemd[1]: Started libpod-conmon-d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1.scope.
Nov 25 18:55:34 np0005535838 podman[257580]: 2025-11-25 23:55:34.201088454 +0000 UTC m=+0.030677592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:55:34 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:55:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:34 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:55:34 np0005535838 podman[257580]: 2025-11-25 23:55:34.342086157 +0000 UTC m=+0.171675275 container init d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:55:34 np0005535838 podman[257580]: 2025-11-25 23:55:34.359884414 +0000 UTC m=+0.189473492 container start d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:55:34 np0005535838 podman[257580]: 2025-11-25 23:55:34.36571261 +0000 UTC m=+0.195301678 container attach d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 18:55:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 12 MiB/s wr, 60 op/s
Nov 25 18:55:34 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 25 18:55:34 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 25 18:55:34 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]: {
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "osd_id": 2,
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "type": "bluestore"
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:    },
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "osd_id": 1,
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "type": "bluestore"
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:    },
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "osd_id": 0,
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:        "type": "bluestore"
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]:    }
Nov 25 18:55:35 np0005535838 amazing_snyder[257596]: }
Nov 25 18:55:35 np0005535838 systemd[1]: libpod-d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1.scope: Deactivated successfully.
Nov 25 18:55:35 np0005535838 systemd[1]: libpod-d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1.scope: Consumed 1.083s CPU time.
Nov 25 18:55:35 np0005535838 podman[257580]: 2025-11-25 23:55:35.445363622 +0000 UTC m=+1.274952720 container died d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 25 18:55:35 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df-merged.mount: Deactivated successfully.
Nov 25 18:55:35 np0005535838 podman[257580]: 2025-11-25 23:55:35.524995133 +0000 UTC m=+1.354584191 container remove d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:55:35 np0005535838 systemd[1]: libpod-conmon-d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1.scope: Deactivated successfully.
Nov 25 18:55:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:55:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:55:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:35 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev dd1bb234-b4a6-48f0-9f40-badc8287b381 does not exist
Nov 25 18:55:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 25 18:55:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 25 18:55:35 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 25 18:55:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 12 MiB/s wr, 108 op/s
Nov 25 18:55:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:55:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 25 18:55:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 25 18:55:37 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 25 18:55:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v777: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 5.5 KiB/s wr, 89 op/s
Nov 25 18:55:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 25 18:55:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 25 18:55:39 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 25 18:55:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 5.5 KiB/s wr, 90 op/s
Nov 25 18:55:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 25 18:55:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 25 18:55:40 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 25 18:55:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:55:40.765 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:55:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:55:40.766 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:55:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:55:40.767 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:55:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 17 KiB/s wr, 244 op/s
Nov 25 18:55:43 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 25 18:55:43 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 25 18:55:43 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 25 18:55:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 15 KiB/s wr, 202 op/s
Nov 25 18:55:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 25 18:55:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 25 18:55:44 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 25 18:55:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 25 18:55:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 25 18:55:45 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 25 18:55:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 17 KiB/s wr, 225 op/s
Nov 25 18:55:47 np0005535838 podman[257692]: 2025-11-25 23:55:47.263654786 +0000 UTC m=+0.080068153 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:55:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 41 op/s
Nov 25 18:55:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.2 KiB/s wr, 36 op/s
Nov 25 18:55:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 25 18:55:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 25 18:55:50 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 25 18:55:51 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 25 18:55:51 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 25 18:55:51 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 25 18:55:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 41 MiB data, 131 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 5.8 KiB/s wr, 73 op/s
Nov 25 18:55:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 25 18:55:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 25 18:55:52 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 25 18:55:53 np0005535838 nova_compute[252550]: 2025-11-25 23:55:53.395 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:55:53 np0005535838 nova_compute[252550]: 2025-11-25 23:55:53.396 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:55:53 np0005535838 nova_compute[252550]: 2025-11-25 23:55:53.484 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 18:55:53 np0005535838 nova_compute[252550]: 2025-11-25 23:55:53.633 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:55:53 np0005535838 nova_compute[252550]: 2025-11-25 23:55:53.634 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:55:53 np0005535838 nova_compute[252550]: 2025-11-25 23:55:53.646 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 18:55:53 np0005535838 nova_compute[252550]: 2025-11-25 23:55:53.647 252558 INFO nova.compute.claims [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 18:55:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 25 18:55:53 np0005535838 nova_compute[252550]: 2025-11-25 23:55:53.780 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:53 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 25 18:55:53 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 25 18:55:54 np0005535838 podman[257733]: 2025-11-25 23:55:54.216901841 +0000 UTC m=+0.047082852 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 18:55:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:55:54 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613194685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:55:54 np0005535838 podman[257732]: 2025-11-25 23:55:54.245134496 +0000 UTC m=+0.076220070 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.253 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.260 252558 DEBUG nova.compute.provider_tree [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.275 252558 DEBUG nova.scheduler.client.report [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.294 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.295 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.334 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.335 252558 DEBUG nova.network.neutron [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.391 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.484 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 18:55:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 41 MiB data, 131 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 6.2 KiB/s wr, 63 op/s
Nov 25 18:55:54 np0005535838 nova_compute[252550]: 2025-11-25 23:55:54.664 252558 INFO nova.virt.block_device [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Booting with volume 6a6b9d67-6cf8-4dcc-abf1-e7df17195818 at /dev/vda#033[00m
Nov 25 18:55:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 25 18:55:54 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 25 18:55:54 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.036 252558 DEBUG os_brick.utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.037 252558 INFO oslo.privsep.daemon [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpj8bitk30/privsep.sock']#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.370 252558 DEBUG nova.network.neutron [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.370 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 18:55:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.761 252558 INFO oslo.privsep.daemon [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.639 257781 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.643 257781 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.644 257781 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.645 257781 INFO oslo.privsep.daemon [-] privsep daemon running as pid 257781#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.767 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[4996de3e-0789-477d-8653-1710a7be7b56]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 18:55:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 25 18:55:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 25 18:55:55 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.885 257781 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.899 257781 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.899 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[859ca9d5-fb4b-40ba-827d-c8bd6d70cb84]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.901 257781 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.911 257781 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.911 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb366ec-5c72-4bf5-a2d7-ce2f4753e53d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:eb1ba11079b3', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.914 257781 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.926 257781 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.926 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[502dc846-31c8-421e-a921-27bc260a094b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.928 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[8546ee9e-d96f-46da-a89a-80b26c06591c]: (4, '99edd01f-cb88-4b88-a56d-15f374f9d1d0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.929 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.952 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.957 252558 DEBUG os_brick.initiator.connectors.lightos [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.958 252558 DEBUG os_brick.initiator.connectors.lightos [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.958 252558 DEBUG os_brick.initiator.connectors.lightos [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.959 252558 DEBUG os_brick.utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] <== get_connector_properties: return (922ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:eb1ba11079b3', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '99edd01f-cb88-4b88-a56d-15f374f9d1d0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 25 18:55:55 np0005535838 nova_compute[252550]: 2025-11-25 23:55:55.960 252558 DEBUG nova.virt.block_device [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updating existing volume attachment record: 7875ce81-3fea-4dc5-9323-881b03756e90 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:55:56
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', '.mgr', 'images', 'cephfs.cephfs.meta', 'vms']
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:55:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v797: 177 pgs: 177 active+clean; 41 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 13 KiB/s wr, 91 op/s
Nov 25 18:55:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 18:55:56 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1576037613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.312 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.315 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.316 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Creating image(s)#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.317 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.318 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Ensure instance console log exists: /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.318 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.319 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.319 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.323 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '7875ce81-3fea-4dc5-9323-881b03756e90', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6a6b9d67-6cf8-4dcc-abf1-e7df17195818', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6a6b9d67-6cf8-4dcc-abf1-e7df17195818', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91', 'attached_at': '', 'detached_at': '', 'volume_id': '6a6b9d67-6cf8-4dcc-abf1-e7df17195818', 'serial': '6a6b9d67-6cf8-4dcc-abf1-e7df17195818'}, 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.330 252558 WARNING nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.338 252558 DEBUG nova.virt.libvirt.host [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.339 252558 DEBUG nova.virt.libvirt.host [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.342 252558 DEBUG nova.virt.libvirt.host [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.343 252558 DEBUG nova.virt.libvirt.host [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.344 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.344 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T23:54:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='897d55b7-f73b-41fe-b70f-d9aa95d4456d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.345 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.346 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.346 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.347 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.347 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.347 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.348 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.348 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.349 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.350 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.390 252558 DEBUG nova.storage.rbd_utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.398 252558 DEBUG nova.privsep.utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.398 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:57 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 18:55:57 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1342532240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.874 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.876 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.876 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.877 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:55:57 np0005535838 systemd[1]: Starting libvirt secret daemon...
Nov 25 18:55:57 np0005535838 systemd[1]: Started libvirt secret daemon.
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.946 252558 DEBUG nova.objects.instance [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'pci_devices' on Instance uuid bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 18:55:57 np0005535838 nova_compute[252550]: 2025-11-25 23:55:57.959 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] End _get_guest_xml xml=<domain type="kvm">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <uuid>bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91</uuid>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <name>instance-00000001</name>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <memory>131072</memory>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <vcpu>1</vcpu>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <metadata>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <nova:name>instance-depend-image</nova:name>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <nova:creationTime>2025-11-25 23:55:57</nova:creationTime>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <nova:flavor name="m1.nano">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <nova:memory>128</nova:memory>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <nova:disk>1</nova:disk>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <nova:swap>0</nova:swap>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <nova:vcpus>1</nova:vcpus>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      </nova:flavor>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <nova:owner>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <nova:user uuid="210f8faea4e1416ab82c35b428209415">tempest-ImageDependencyTests-795400484-project-member</nova:user>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <nova:project uuid="cda2ac0afb334f238d6d956454314f3d">tempest-ImageDependencyTests-795400484</nova:project>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      </nova:owner>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <nova:ports/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    </nova:instance>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  </metadata>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <sysinfo type="smbios">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <system>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <entry name="manufacturer">RDO</entry>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <entry name="product">OpenStack Compute</entry>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <entry name="serial">bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91</entry>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <entry name="uuid">bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91</entry>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <entry name="family">Virtual Machine</entry>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    </system>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  </sysinfo>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <os>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <boot dev="hd"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <smbios mode="sysinfo"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  </os>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <features>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <acpi/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <apic/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <vmcoreinfo/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  </features>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <clock offset="utc">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <timer name="hpet" present="no"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  </clock>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <cpu mode="host-model" match="exact">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  </cpu>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  <devices>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <disk type="network" device="cdrom">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <driver type="raw" cache="none"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <source protocol="rbd" name="vms/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <host name="192.168.122.100" port="6789"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      </source>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <auth username="openstack">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <secret type="ceph" uuid="101922db-575f-58e2-980f-928050464f69"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      </auth>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <target dev="sda" bus="sata"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    </disk>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <disk type="network" device="disk">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <source protocol="rbd" name="volumes/volume-6a6b9d67-6cf8-4dcc-abf1-e7df17195818">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <host name="192.168.122.100" port="6789"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      </source>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <auth username="openstack">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:        <secret type="ceph" uuid="101922db-575f-58e2-980f-928050464f69"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      </auth>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <target dev="vda" bus="virtio"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <serial>6a6b9d67-6cf8-4dcc-abf1-e7df17195818</serial>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    </disk>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <serial type="pty">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <log file="/var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/console.log" append="off"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    </serial>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <video>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <model type="virtio"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    </video>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <input type="tablet" bus="usb"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <rng model="virtio">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <backend model="random">/dev/urandom</backend>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    </rng>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <controller type="usb" index="0"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    <memballoon model="virtio">
Nov 25 18:55:57 np0005535838 nova_compute[252550]:      <stats period="10"/>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:    </memballoon>
Nov 25 18:55:57 np0005535838 nova_compute[252550]:  </devices>
Nov 25 18:55:57 np0005535838 nova_compute[252550]: </domain>
Nov 25 18:55:57 np0005535838 nova_compute[252550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 18:55:58 np0005535838 nova_compute[252550]: 2025-11-25 23:55:58.017 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 18:55:58 np0005535838 nova_compute[252550]: 2025-11-25 23:55:58.017 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 18:55:58 np0005535838 nova_compute[252550]: 2025-11-25 23:55:58.018 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Using config drive#033[00m
Nov 25 18:55:58 np0005535838 nova_compute[252550]: 2025-11-25 23:55:58.051 252558 DEBUG nova.storage.rbd_utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 18:55:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v798: 177 pgs: 177 active+clean; 41 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 15 KiB/s wr, 112 op/s
Nov 25 18:55:58 np0005535838 nova_compute[252550]: 2025-11-25 23:55:58.634 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Creating config drive at /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config#033[00m
Nov 25 18:55:58 np0005535838 nova_compute[252550]: 2025-11-25 23:55:58.643 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzmixopyw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:58 np0005535838 nova_compute[252550]: 2025-11-25 23:55:58.794 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzmixopyw" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:55:58 np0005535838 nova_compute[252550]: 2025-11-25 23:55:58.818 252558 DEBUG nova.storage.rbd_utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 18:55:58 np0005535838 nova_compute[252550]: 2025-11-25 23:55:58.821 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:55:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 25 18:55:59 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 25 18:55:59 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 25 18:56:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 41 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 14 KiB/s wr, 107 op/s
Nov 25 18:56:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 25 18:56:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 25 18:56:00 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.031 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.032 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deleting local config drive /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config because it was imported into RBD.#033[00m
Nov 25 18:56:01 np0005535838 systemd-machined[213892]: New machine qemu-1-instance-00000001.
Nov 25 18:56:01 np0005535838 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.660 252558 DEBUG nova.virt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Emitting event <LifecycleEvent: 1764114961.6597042, bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.661 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] VM Resumed (Lifecycle Event)#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.665 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.666 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.671 252558 INFO nova.virt.libvirt.driver [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance spawned successfully.#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.672 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.714 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.721 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.776 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.776 252558 DEBUG nova.virt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Emitting event <LifecycleEvent: 1764114961.6651616, bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.777 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] VM Started (Lifecycle Event)#033[00m
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.796 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667638841407827 of space, bias 1.0, pg target 0.2002916524223481 quantized to 32 (current 32)
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:56:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.807 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.807 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.808 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.808 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.809 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.810 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.816 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.855 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 18:56:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.880 252558 INFO nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 4.57 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.883 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 18:56:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 25 18:56:01 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.971 252558 INFO nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 8.39 seconds to build instance.#033[00m
Nov 25 18:56:01 np0005535838 nova_compute[252550]: 2025-11-25 23:56:01.993 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v803: 177 pgs: 177 active+clean; 42 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 37 KiB/s wr, 184 op/s
Nov 25 18:56:02 np0005535838 nova_compute[252550]: 2025-11-25 23:56:02.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:02 np0005535838 nova_compute[252550]: 2025-11-25 23:56:02.824 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 18:56:02 np0005535838 nova_compute[252550]: 2025-11-25 23:56:02.854 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 18:56:02 np0005535838 nova_compute[252550]: 2025-11-25 23:56:02.855 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:02 np0005535838 nova_compute[252550]: 2025-11-25 23:56:02.855 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 18:56:02 np0005535838 nova_compute[252550]: 2025-11-25 23:56:02.874 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 25 18:56:02 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 25 18:56:02 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 25 18:56:03 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 25 18:56:03 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 25 18:56:03 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 25 18:56:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 177 active+clean; 42 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 48 KiB/s wr, 206 op/s
Nov 25 18:56:04 np0005535838 nova_compute[252550]: 2025-11-25 23:56:04.883 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 25 18:56:04 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 25 18:56:04 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 25 18:56:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 25 18:56:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 25 18:56:05 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 25 18:56:05 np0005535838 nova_compute[252550]: 2025-11-25 23:56:05.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v809: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 8.0 KiB/s wr, 190 op/s
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.823 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.855 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.855 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.856 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.856 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 18:56:06 np0005535838 nova_compute[252550]: 2025-11-25 23:56:06.856 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:56:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 25 18:56:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 25 18:56:06 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 25 18:56:07 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:56:07 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650025285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.341 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.420 252558 DEBUG nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.421 252558 DEBUG nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.664 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.666 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5220MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.666 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.667 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.964 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Instance bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.965 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 18:56:07 np0005535838 nova_compute[252550]: 2025-11-25 23:56:07.965 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.026 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing inventories for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.118 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating ProviderTree inventory for provider 08547965-b35f-4b7b-95d8-902f06aa011c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.119 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating inventory in ProviderTree for provider 08547965-b35f-4b7b-95d8-902f06aa011c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.151 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing aggregate associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.179 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing trait associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.214 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:56:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 13 KiB/s wr, 289 op/s
Nov 25 18:56:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:56:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844459817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.644 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.650 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.670 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.711 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 18:56:08 np0005535838 nova_compute[252550]: 2025-11-25 23:56:08.711 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 25 18:56:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 25 18:56:08 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 25 18:56:09 np0005535838 nova_compute[252550]: 2025-11-25 23:56:09.711 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:09 np0005535838 nova_compute[252550]: 2025-11-25 23:56:09.712 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 18:56:09 np0005535838 nova_compute[252550]: 2025-11-25 23:56:09.712 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.233 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.233 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquired lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.234 252558 DEBUG nova.network.neutron [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.234 252558 DEBUG nova.objects.instance [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 18:56:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v813: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 11 KiB/s wr, 238 op/s
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.629 252558 DEBUG nova.network.neutron [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 18:56:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 25 18:56:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 25 18:56:10 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.961 252558 DEBUG nova.network.neutron [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.983 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Releasing lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.983 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.983 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:10 np0005535838 nova_compute[252550]: 2025-11-25 23:56:10.984 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:56:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v815: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 9.7 KiB/s wr, 178 op/s
Nov 25 18:56:12 np0005535838 nova_compute[252550]: 2025-11-25 23:56:12.613 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:12 np0005535838 nova_compute[252550]: 2025-11-25 23:56:12.614 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:12 np0005535838 nova_compute[252550]: 2025-11-25 23:56:12.629 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 18:56:12 np0005535838 nova_compute[252550]: 2025-11-25 23:56:12.699 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:12 np0005535838 nova_compute[252550]: 2025-11-25 23:56:12.699 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:12 np0005535838 nova_compute[252550]: 2025-11-25 23:56:12.724 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 18:56:12 np0005535838 nova_compute[252550]: 2025-11-25 23:56:12.724 252558 INFO nova.compute.claims [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 18:56:12 np0005535838 nova_compute[252550]: 2025-11-25 23:56:12.830 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:56:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:56:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865924001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.267 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.274 252558 DEBUG nova.compute.provider_tree [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.292 252558 DEBUG nova.scheduler.client.report [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.321 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.324 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.397 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.398 252558 DEBUG nova.network.neutron [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.428 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.451 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.564 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.565 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.566 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Creating image(s)
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.601 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.636 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.669 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.674 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "6aa298c67176a6f202556fea602ab4a4483a8f4b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.676 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "6aa298c67176a6f202556fea602ab4a4483a8f4b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.969 252558 DEBUG nova.network.neutron [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 18:56:13 np0005535838 nova_compute[252550]: 2025-11-25 23:56:13.969 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.032 252558 DEBUG nova.virt.libvirt.imagebackend [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image locations are: [{'url': 'rbd://101922db-575f-58e2-980f-928050464f69/images/16d8485c-e81e-455c-b234-ffc2513a8236/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://101922db-575f-58e2-980f-928050464f69/images/16d8485c-e81e-455c-b234-ffc2513a8236/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.103 252558 DEBUG nova.virt.libvirt.imagebackend [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Selected location: {'url': 'rbd://101922db-575f-58e2-980f-928050464f69/images/16d8485c-e81e-455c-b234-ffc2513a8236/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.104 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] cloning images/16d8485c-e81e-455c-b234-ffc2513a8236@snap to None/861debf8-73c8-45fe-92d9-fbfa772d34eb_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.238 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "6aa298c67176a6f202556fea602ab4a4483a8f4b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:56:14 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:56:14.242 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:82:13', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '36:f3:66:b7:57:d1'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 18:56:14 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:56:14.243 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.406 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] resizing rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.502 252558 DEBUG nova.objects.instance [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'migration_context' on Instance uuid 861debf8-73c8-45fe-92d9-fbfa772d34eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.515 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.516 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Ensure instance console log exists: /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.516 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.517 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.517 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.520 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='df9355b10df7ff91027aeb7f3322435e',container_format='bare',created_at=2025-11-25T23:56:10Z,direct_url=<?>,disk_format='raw',id=16d8485c-e81e-455c-b234-ffc2513a8236,min_disk=0,min_ram=0,name='tempest-image-dependency-test-319602011',owner='cda2ac0afb334f238d6d956454314f3d',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T23:56:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '16d8485c-e81e-455c-b234-ffc2513a8236'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.525 252558 WARNING nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.530 252558 DEBUG nova.virt.libvirt.host [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.531 252558 DEBUG nova.virt.libvirt.host [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.534 252558 DEBUG nova.virt.libvirt.host [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.535 252558 DEBUG nova.virt.libvirt.host [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.536 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.536 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T23:54:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='897d55b7-f73b-41fe-b70f-d9aa95d4456d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='df9355b10df7ff91027aeb7f3322435e',container_format='bare',created_at=2025-11-25T23:56:10Z,direct_url=<?>,disk_format='raw',id=16d8485c-e81e-455c-b234-ffc2513a8236,min_disk=0,min_ram=0,name='tempest-image-dependency-test-319602011',owner='cda2ac0afb334f238d6d956454314f3d',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T23:56:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.537 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.537 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.538 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.538 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.538 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.539 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.539 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.540 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.540 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.541 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 18:56:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v816: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 7.6 KiB/s wr, 141 op/s
Nov 25 18:56:14 np0005535838 nova_compute[252550]: 2025-11-25 23:56:14.545 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:56:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 18:56:15 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107091497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.050 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.073 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.077 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:56:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 18:56:15 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2135860034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.526 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.528 252558 DEBUG nova.objects.instance [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'pci_devices' on Instance uuid 861debf8-73c8-45fe-92d9-fbfa772d34eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.590 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <uuid>861debf8-73c8-45fe-92d9-fbfa772d34eb</uuid>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <name>instance-00000002</name>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <memory>131072</memory>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <vcpu>1</vcpu>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <metadata>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <nova:name>instance-depend-image</nova:name>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <nova:creationTime>2025-11-25 23:56:14</nova:creationTime>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <nova:flavor name="m1.nano">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <nova:memory>128</nova:memory>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <nova:disk>1</nova:disk>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <nova:swap>0</nova:swap>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <nova:vcpus>1</nova:vcpus>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      </nova:flavor>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <nova:owner>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <nova:user uuid="210f8faea4e1416ab82c35b428209415">tempest-ImageDependencyTests-795400484-project-member</nova:user>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <nova:project uuid="cda2ac0afb334f238d6d956454314f3d">tempest-ImageDependencyTests-795400484</nova:project>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      </nova:owner>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <nova:root type="image" uuid="16d8485c-e81e-455c-b234-ffc2513a8236"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <nova:ports/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    </nova:instance>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  </metadata>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <sysinfo type="smbios">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <system>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <entry name="manufacturer">RDO</entry>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <entry name="product">OpenStack Compute</entry>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <entry name="serial">861debf8-73c8-45fe-92d9-fbfa772d34eb</entry>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <entry name="uuid">861debf8-73c8-45fe-92d9-fbfa772d34eb</entry>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <entry name="family">Virtual Machine</entry>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    </system>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  </sysinfo>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <os>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <boot dev="hd"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <smbios mode="sysinfo"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  </os>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <features>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <acpi/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <apic/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <vmcoreinfo/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  </features>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <clock offset="utc">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <timer name="hpet" present="no"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  </clock>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <cpu mode="host-model" match="exact">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  </cpu>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  <devices>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <disk type="network" device="disk">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <driver type="raw" cache="none"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <source protocol="rbd" name="vms/861debf8-73c8-45fe-92d9-fbfa772d34eb_disk">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <host name="192.168.122.100" port="6789"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      </source>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <auth username="openstack">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <secret type="ceph" uuid="101922db-575f-58e2-980f-928050464f69"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      </auth>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <target dev="vda" bus="virtio"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    </disk>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <disk type="network" device="cdrom">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <driver type="raw" cache="none"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <source protocol="rbd" name="vms/861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <host name="192.168.122.100" port="6789"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      </source>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <auth username="openstack">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:        <secret type="ceph" uuid="101922db-575f-58e2-980f-928050464f69"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      </auth>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <target dev="sda" bus="sata"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    </disk>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <serial type="pty">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <log file="/var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/console.log" append="off"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    </serial>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <video>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <model type="virtio"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    </video>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <input type="tablet" bus="usb"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <rng model="virtio">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <backend model="random">/dev/urandom</backend>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    </rng>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <controller type="usb" index="0"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    <memballoon model="virtio">
Nov 25 18:56:15 np0005535838 nova_compute[252550]:      <stats period="10"/>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:    </memballoon>
Nov 25 18:56:15 np0005535838 nova_compute[252550]:  </devices>
Nov 25 18:56:15 np0005535838 nova_compute[252550]: </domain>
Nov 25 18:56:15 np0005535838 nova_compute[252550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 18:56:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 25 18:56:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 25 18:56:15 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.768 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.768 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.769 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Using config drive
Nov 25 18:56:15 np0005535838 nova_compute[252550]: 2025-11-25 23:56:15.806 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 18:56:16 np0005535838 nova_compute[252550]: 2025-11-25 23:56:16.027 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Creating config drive at /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config
Nov 25 18:56:16 np0005535838 nova_compute[252550]: 2025-11-25 23:56:16.036 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpueh_vrze execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:56:16 np0005535838 nova_compute[252550]: 2025-11-25 23:56:16.177 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpueh_vrze" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:56:16 np0005535838 nova_compute[252550]: 2025-11-25 23:56:16.215 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 18:56:16 np0005535838 nova_compute[252550]: 2025-11-25 23:56:16.220 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:56:16 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:56:16.245 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 18:56:16 np0005535838 nova_compute[252550]: 2025-11-25 23:56:16.422 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:56:16 np0005535838 nova_compute[252550]: 2025-11-25 23:56:16.423 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Deleting local config drive /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config because it was imported into RBD.
Nov 25 18:56:16 np0005535838 systemd-machined[213892]: New machine qemu-2-instance-00000002.
Nov 25 18:56:16 np0005535838 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 25 18:56:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v818: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 5.3 KiB/s wr, 113 op/s
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.075 252558 DEBUG nova.virt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Emitting event <LifecycleEvent: 1764114977.0745485, 861debf8-73c8-45fe-92d9-fbfa772d34eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.076 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] VM Resumed (Lifecycle Event)
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.079 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.079 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.082 252558 INFO nova.virt.libvirt.driver [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance spawned successfully.
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.083 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.306 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.312 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.464 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.465 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.466 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.467 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.468 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.469 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.489 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.490 252558 DEBUG nova.virt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Emitting event <LifecycleEvent: 1764114977.0760424, 861debf8-73c8-45fe-92d9-fbfa772d34eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.491 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] VM Started (Lifecycle Event)
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.658 252558 INFO nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 4.09 seconds to spawn the instance on the hypervisor.
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.659 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.676 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.678 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 18:56:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:56:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100619018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:56:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:56:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100619018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.751 252558 INFO nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 5.07 seconds to build instance.
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.786 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 18:56:17 np0005535838 nova_compute[252550]: 2025-11-25 23:56:17.911 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:56:18 np0005535838 podman[258405]: 2025-11-25 23:56:18.294077359 +0000 UTC m=+0.111400281 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 18:56:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v819: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 6.0 KiB/s wr, 116 op/s
Nov 25 18:56:19 np0005535838 nova_compute[252550]: 2025-11-25 23:56:19.590 252558 DEBUG nova.compute.manager [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 18:56:19 np0005535838 nova_compute[252550]: 2025-11-25 23:56:19.812 252558 INFO nova.compute.manager [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] instance snapshotting
Nov 25 18:56:20 np0005535838 nova_compute[252550]: 2025-11-25 23:56:20.542 252558 INFO nova.virt.libvirt.driver [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Beginning live snapshot process
Nov 25 18:56:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v820: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.9 KiB/s wr, 94 op/s
Nov 25 18:56:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:20 np0005535838 nova_compute[252550]: 2025-11-25 23:56:20.722 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] creating snapshot(b26e7a6e5ea0472a9476b66aee2cf159) on rbd image(861debf8-73c8-45fe-92d9-fbfa772d34eb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 18:56:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 25 18:56:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 25 18:56:21 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 25 18:56:21 np0005535838 nova_compute[252550]: 2025-11-25 23:56:21.820 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] cloning vms/861debf8-73c8-45fe-92d9-fbfa772d34eb_disk@b26e7a6e5ea0472a9476b66aee2cf159 to images/ae4d3bd2-86c2-4f7b-9fdd-0fc899a2923a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 18:56:21 np0005535838 nova_compute[252550]: 2025-11-25 23:56:21.971 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] flattening images/ae4d3bd2-86c2-4f7b-9fdd-0fc899a2923a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 18:56:22 np0005535838 nova_compute[252550]: 2025-11-25 23:56:22.153 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] removing snapshot(b26e7a6e5ea0472a9476b66aee2cf159) on rbd image(861debf8-73c8-45fe-92d9-fbfa772d34eb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 18:56:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v822: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 22 KiB/s wr, 103 op/s
Nov 25 18:56:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 25 18:56:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 25 18:56:22 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 25 18:56:22 np0005535838 nova_compute[252550]: 2025-11-25 23:56:22.820 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] creating snapshot(snap) on rbd image(ae4d3bd2-86c2-4f7b-9fdd-0fc899a2923a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 18:56:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 25 18:56:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 25 18:56:23 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 25 18:56:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v825: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 26 KiB/s wr, 64 op/s
Nov 25 18:56:25 np0005535838 nova_compute[252550]: 2025-11-25 23:56:25.169 252558 INFO nova.virt.libvirt.driver [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Snapshot image upload complete
Nov 25 18:56:25 np0005535838 nova_compute[252550]: 2025-11-25 23:56:25.170 252558 INFO nova.compute.manager [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 5.36 seconds to snapshot the instance on the hypervisor.
Nov 25 18:56:25 np0005535838 podman[258569]: 2025-11-25 23:56:25.287956475 +0000 UTC m=+0.095245388 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 18:56:25 np0005535838 podman[258568]: 2025-11-25 23:56:25.32667386 +0000 UTC m=+0.142335957 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:56:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:56:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:56:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:56:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:56:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:56:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:56:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v826: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 29 KiB/s wr, 153 op/s
Nov 25 18:56:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 25 18:56:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 25 18:56:26 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 25 18:56:27 np0005535838 nova_compute[252550]: 2025-11-25 23:56:27.559 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:27 np0005535838 nova_compute[252550]: 2025-11-25 23:56:27.560 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:27 np0005535838 nova_compute[252550]: 2025-11-25 23:56:27.561 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "861debf8-73c8-45fe-92d9-fbfa772d34eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:27 np0005535838 nova_compute[252550]: 2025-11-25 23:56:27.561 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:27 np0005535838 nova_compute[252550]: 2025-11-25 23:56:27.562 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:27 np0005535838 nova_compute[252550]: 2025-11-25 23:56:27.564 252558 INFO nova.compute.manager [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Terminating instance#033[00m
Nov 25 18:56:27 np0005535838 nova_compute[252550]: 2025-11-25 23:56:27.565 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "refresh_cache-861debf8-73c8-45fe-92d9-fbfa772d34eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 18:56:27 np0005535838 nova_compute[252550]: 2025-11-25 23:56:27.566 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquired lock "refresh_cache-861debf8-73c8-45fe-92d9-fbfa772d34eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 18:56:27 np0005535838 nova_compute[252550]: 2025-11-25 23:56:27.567 252558 DEBUG nova.network.neutron [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 18:56:28 np0005535838 nova_compute[252550]: 2025-11-25 23:56:28.223 252558 DEBUG nova.network.neutron [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 18:56:28 np0005535838 nova_compute[252550]: 2025-11-25 23:56:28.448 252558 DEBUG nova.network.neutron [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 18:56:28 np0005535838 nova_compute[252550]: 2025-11-25 23:56:28.504 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Releasing lock "refresh_cache-861debf8-73c8-45fe-92d9-fbfa772d34eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 18:56:28 np0005535838 nova_compute[252550]: 2025-11-25 23:56:28.505 252558 DEBUG nova.compute.manager [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 18:56:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v828: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 6.7 KiB/s wr, 164 op/s
Nov 25 18:56:28 np0005535838 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 25 18:56:28 np0005535838 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 1.131s CPU time.
Nov 25 18:56:28 np0005535838 systemd-machined[213892]: Machine qemu-2-instance-00000002 terminated.
Nov 25 18:56:28 np0005535838 nova_compute[252550]: 2025-11-25 23:56:28.732 252558 INFO nova.virt.libvirt.driver [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance destroyed successfully.#033[00m
Nov 25 18:56:28 np0005535838 nova_compute[252550]: 2025-11-25 23:56:28.732 252558 DEBUG nova.objects.instance [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'resources' on Instance uuid 861debf8-73c8-45fe-92d9-fbfa772d34eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 18:56:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 25 18:56:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 25 18:56:29 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 25 18:56:30 np0005535838 nova_compute[252550]: 2025-11-25 23:56:30.223 252558 INFO nova.virt.libvirt.driver [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Deleting instance files /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb_del#033[00m
Nov 25 18:56:30 np0005535838 nova_compute[252550]: 2025-11-25 23:56:30.224 252558 INFO nova.virt.libvirt.driver [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Deletion of /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb_del complete#033[00m
Nov 25 18:56:30 np0005535838 nova_compute[252550]: 2025-11-25 23:56:30.377 252558 DEBUG nova.virt.libvirt.host [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 25 18:56:30 np0005535838 nova_compute[252550]: 2025-11-25 23:56:30.378 252558 INFO nova.virt.libvirt.host [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] UEFI support detected#033[00m
Nov 25 18:56:30 np0005535838 nova_compute[252550]: 2025-11-25 23:56:30.380 252558 INFO nova.compute.manager [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 1.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 18:56:30 np0005535838 nova_compute[252550]: 2025-11-25 23:56:30.381 252558 DEBUG oslo.service.loopingcall [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 18:56:30 np0005535838 nova_compute[252550]: 2025-11-25 23:56:30.382 252558 DEBUG nova.compute.manager [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 18:56:30 np0005535838 nova_compute[252550]: 2025-11-25 23:56:30.382 252558 DEBUG nova.network.neutron [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 18:56:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v830: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 5.9 KiB/s wr, 145 op/s
Nov 25 18:56:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 25 18:56:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 25 18:56:30 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 25 18:56:31 np0005535838 nova_compute[252550]: 2025-11-25 23:56:31.220 252558 DEBUG nova.network.neutron [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 18:56:31 np0005535838 nova_compute[252550]: 2025-11-25 23:56:31.307 252558 DEBUG nova.network.neutron [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 18:56:31 np0005535838 nova_compute[252550]: 2025-11-25 23:56:31.402 252558 INFO nova.compute.manager [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 1.02 seconds to deallocate network for instance.#033[00m
Nov 25 18:56:31 np0005535838 nova_compute[252550]: 2025-11-25 23:56:31.612 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:31 np0005535838 nova_compute[252550]: 2025-11-25 23:56:31.612 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:31 np0005535838 nova_compute[252550]: 2025-11-25 23:56:31.712 252558 DEBUG oslo_concurrency.processutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:56:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:56:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2076522244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:56:32 np0005535838 nova_compute[252550]: 2025-11-25 23:56:32.196 252558 DEBUG oslo_concurrency.processutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:56:32 np0005535838 nova_compute[252550]: 2025-11-25 23:56:32.204 252558 DEBUG nova.compute.provider_tree [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:56:32 np0005535838 nova_compute[252550]: 2025-11-25 23:56:32.261 252558 DEBUG nova.scheduler.client.report [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:56:32 np0005535838 nova_compute[252550]: 2025-11-25 23:56:32.474 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 5.8 KiB/s wr, 137 op/s
Nov 25 18:56:32 np0005535838 nova_compute[252550]: 2025-11-25 23:56:32.565 252558 INFO nova.scheduler.client.report [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Deleted allocations for instance 861debf8-73c8-45fe-92d9-fbfa772d34eb#033[00m
Nov 25 18:56:32 np0005535838 nova_compute[252550]: 2025-11-25 23:56:32.856 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.398 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.398 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.399 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.399 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.399 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.401 252558 INFO nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Terminating instance#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.403 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.403 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquired lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.404 252558 DEBUG nova.network.neutron [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 18:56:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.5 KiB/s wr, 106 op/s
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.645 252558 DEBUG nova.network.neutron [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.884 252558 DEBUG nova.network.neutron [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.900 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Releasing lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 18:56:34 np0005535838 nova_compute[252550]: 2025-11-25 23:56:34.901 252558 DEBUG nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 18:56:34 np0005535838 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 25 18:56:34 np0005535838 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.223s CPU time.
Nov 25 18:56:34 np0005535838 systemd-machined[213892]: Machine qemu-1-instance-00000001 terminated.
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.130 252558 INFO nova.virt.libvirt.driver [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance destroyed successfully.#033[00m
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.131 252558 DEBUG nova.objects.instance [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'resources' on Instance uuid bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.357 252558 INFO nova.virt.libvirt.driver [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deleting instance files /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_del#033[00m
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.359 252558 INFO nova.virt.libvirt.driver [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deletion of /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_del complete#033[00m
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.433 252558 INFO nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 0.53 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.434 252558 DEBUG oslo.service.loopingcall [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.435 252558 DEBUG nova.compute.manager [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.435 252558 DEBUG nova.network.neutron [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 18:56:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 25 18:56:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 25 18:56:35 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.953 252558 DEBUG nova.network.neutron [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.972 252558 DEBUG nova.network.neutron [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 18:56:35 np0005535838 nova_compute[252550]: 2025-11-25 23:56:35.990 252558 INFO nova.compute.manager [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 0.56 seconds to deallocate network for instance.#033[00m
Nov 25 18:56:36 np0005535838 nova_compute[252550]: 2025-11-25 23:56:36.266 252558 INFO nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 0.28 seconds to detach 1 volumes for instance.#033[00m
Nov 25 18:56:36 np0005535838 nova_compute[252550]: 2025-11-25 23:56:36.268 252558 DEBUG nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deleting volume: 6a6b9d67-6cf8-4dcc-abf1-e7df17195818 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 25 18:56:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v835: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.5 KiB/s wr, 71 op/s
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:56:36 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 3f68cd52-6cd8-48b9-960c-b98ee7b3714d does not exist
Nov 25 18:56:36 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev e802d9a8-3186-4530-b532-29bda0af174c does not exist
Nov 25 18:56:36 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 172aa32d-adea-42f2-ada9-fbbf95b9179b does not exist
Nov 25 18:56:36 np0005535838 nova_compute[252550]: 2025-11-25 23:56:36.761 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:56:36 np0005535838 nova_compute[252550]: 2025-11-25 23:56:36.762 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:56:36 np0005535838 nova_compute[252550]: 2025-11-25 23:56:36.825 252558 DEBUG oslo_concurrency.processutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:56:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:56:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:56:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502974040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:56:37 np0005535838 nova_compute[252550]: 2025-11-25 23:56:37.285 252558 DEBUG oslo_concurrency.processutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:56:37 np0005535838 nova_compute[252550]: 2025-11-25 23:56:37.294 252558 DEBUG nova.compute.provider_tree [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:56:37 np0005535838 podman[258971]: 2025-11-25 23:56:37.463671668 +0000 UTC m=+0.045604770 container create 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:56:37 np0005535838 systemd[1]: Started libpod-conmon-5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96.scope.
Nov 25 18:56:37 np0005535838 nova_compute[252550]: 2025-11-25 23:56:37.520 252558 DEBUG nova.scheduler.client.report [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:56:37 np0005535838 podman[258971]: 2025-11-25 23:56:37.44205476 +0000 UTC m=+0.023987892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:56:37 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:56:37 np0005535838 podman[258971]: 2025-11-25 23:56:37.568654445 +0000 UTC m=+0.150587618 container init 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:56:37 np0005535838 podman[258971]: 2025-11-25 23:56:37.580319097 +0000 UTC m=+0.162252219 container start 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:56:37 np0005535838 podman[258971]: 2025-11-25 23:56:37.583672327 +0000 UTC m=+0.165605459 container attach 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 18:56:37 np0005535838 magical_noyce[258985]: 167 167
Nov 25 18:56:37 np0005535838 systemd[1]: libpod-5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96.scope: Deactivated successfully.
Nov 25 18:56:37 np0005535838 podman[258971]: 2025-11-25 23:56:37.589784351 +0000 UTC m=+0.171717483 container died 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:56:37 np0005535838 systemd[1]: var-lib-containers-storage-overlay-7772ce9b3e689a7de297b1374e900ceeb675d16a88472188871961456ffedb30-merged.mount: Deactivated successfully.
Nov 25 18:56:37 np0005535838 podman[258971]: 2025-11-25 23:56:37.642289075 +0000 UTC m=+0.224222167 container remove 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:56:37 np0005535838 systemd[1]: libpod-conmon-5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96.scope: Deactivated successfully.
Nov 25 18:56:37 np0005535838 nova_compute[252550]: 2025-11-25 23:56:37.855 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:37 np0005535838 podman[259012]: 2025-11-25 23:56:37.868239327 +0000 UTC m=+0.077458462 container create d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:56:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 25 18:56:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 25 18:56:37 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 25 18:56:37 np0005535838 systemd[1]: Started libpod-conmon-d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e.scope.
Nov 25 18:56:37 np0005535838 podman[259012]: 2025-11-25 23:56:37.838756179 +0000 UTC m=+0.047975354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:56:37 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:56:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:37 np0005535838 podman[259012]: 2025-11-25 23:56:37.998958643 +0000 UTC m=+0.208177748 container init d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:56:38 np0005535838 podman[259012]: 2025-11-25 23:56:38.005317543 +0000 UTC m=+0.214536668 container start d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 18:56:38 np0005535838 podman[259012]: 2025-11-25 23:56:38.010838261 +0000 UTC m=+0.220057386 container attach d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 18:56:38 np0005535838 nova_compute[252550]: 2025-11-25 23:56:38.038 252558 INFO nova.scheduler.client.report [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Deleted allocations for instance bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91#033[00m
Nov 25 18:56:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v837: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.3 KiB/s wr, 77 op/s
Nov 25 18:56:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:56:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298838289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:56:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:56:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298838289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:56:38 np0005535838 nova_compute[252550]: 2025-11-25 23:56:38.755 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:39 np0005535838 wizardly_perlman[259030]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:56:39 np0005535838 wizardly_perlman[259030]: --> relative data size: 1.0
Nov 25 18:56:39 np0005535838 wizardly_perlman[259030]: --> All data devices are unavailable
Nov 25 18:56:39 np0005535838 systemd[1]: libpod-d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e.scope: Deactivated successfully.
Nov 25 18:56:39 np0005535838 podman[259012]: 2025-11-25 23:56:39.177295625 +0000 UTC m=+1.386514730 container died d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:56:39 np0005535838 systemd[1]: libpod-d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e.scope: Consumed 1.107s CPU time.
Nov 25 18:56:39 np0005535838 systemd[1]: var-lib-containers-storage-overlay-4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c-merged.mount: Deactivated successfully.
Nov 25 18:56:39 np0005535838 podman[259012]: 2025-11-25 23:56:39.248750396 +0000 UTC m=+1.457969491 container remove d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:56:39 np0005535838 systemd[1]: libpod-conmon-d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e.scope: Deactivated successfully.
Nov 25 18:56:40 np0005535838 podman[259217]: 2025-11-25 23:56:40.11638893 +0000 UTC m=+0.056043110 container create 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:56:40 np0005535838 systemd[1]: Started libpod-conmon-58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88.scope.
Nov 25 18:56:40 np0005535838 podman[259217]: 2025-11-25 23:56:40.091038522 +0000 UTC m=+0.030692772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:56:40 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:56:40 np0005535838 podman[259217]: 2025-11-25 23:56:40.21362859 +0000 UTC m=+0.153282800 container init 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:56:40 np0005535838 podman[259217]: 2025-11-25 23:56:40.2233714 +0000 UTC m=+0.163025580 container start 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:56:40 np0005535838 podman[259217]: 2025-11-25 23:56:40.226731331 +0000 UTC m=+0.166385551 container attach 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:56:40 np0005535838 tender_rosalind[259233]: 167 167
Nov 25 18:56:40 np0005535838 systemd[1]: libpod-58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88.scope: Deactivated successfully.
Nov 25 18:56:40 np0005535838 podman[259217]: 2025-11-25 23:56:40.231846847 +0000 UTC m=+0.171501057 container died 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:56:40 np0005535838 systemd[1]: var-lib-containers-storage-overlay-05a30ce1c2f1dcdb84bf54f232b5af3cb0b73da11a2f17378c58339967abe0f8-merged.mount: Deactivated successfully.
Nov 25 18:56:40 np0005535838 podman[259217]: 2025-11-25 23:56:40.283733914 +0000 UTC m=+0.223388124 container remove 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:56:40 np0005535838 systemd[1]: libpod-conmon-58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88.scope: Deactivated successfully.
Nov 25 18:56:40 np0005535838 podman[259259]: 2025-11-25 23:56:40.510101818 +0000 UTC m=+0.058351701 container create 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:56:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v838: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Nov 25 18:56:40 np0005535838 systemd[1]: Started libpod-conmon-24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093.scope.
Nov 25 18:56:40 np0005535838 podman[259259]: 2025-11-25 23:56:40.491988434 +0000 UTC m=+0.040238347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:56:40 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:56:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:40 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:40 np0005535838 podman[259259]: 2025-11-25 23:56:40.616902654 +0000 UTC m=+0.165152537 container init 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:56:40 np0005535838 podman[259259]: 2025-11-25 23:56:40.634763162 +0000 UTC m=+0.183013045 container start 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 18:56:40 np0005535838 podman[259259]: 2025-11-25 23:56:40.638817831 +0000 UTC m=+0.187067714 container attach 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 18:56:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:56:40.765 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:56:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:56:40.767 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:56:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:56:40.768 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]: {
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:    "0": [
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:        {
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "devices": [
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "/dev/loop3"
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            ],
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_name": "ceph_lv0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_size": "21470642176",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "name": "ceph_lv0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "tags": {
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.cluster_name": "ceph",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.crush_device_class": "",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.encrypted": "0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.osd_id": "0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.type": "block",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.vdo": "0"
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            },
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "type": "block",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "vg_name": "ceph_vg0"
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:        }
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:    ],
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:    "1": [
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:        {
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "devices": [
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "/dev/loop4"
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            ],
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_name": "ceph_lv1",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_size": "21470642176",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "name": "ceph_lv1",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "tags": {
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.cluster_name": "ceph",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.crush_device_class": "",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.encrypted": "0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.osd_id": "1",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.type": "block",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.vdo": "0"
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            },
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "type": "block",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "vg_name": "ceph_vg1"
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:        }
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:    ],
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:    "2": [
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:        {
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "devices": [
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "/dev/loop5"
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            ],
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_name": "ceph_lv2",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_size": "21470642176",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "name": "ceph_lv2",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "tags": {
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.cluster_name": "ceph",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.crush_device_class": "",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.encrypted": "0",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.osd_id": "2",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.type": "block",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:                "ceph.vdo": "0"
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            },
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "type": "block",
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:            "vg_name": "ceph_vg2"
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:        }
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]:    ]
Nov 25 18:56:41 np0005535838 blissful_mirzakhani[259275]: }
Nov 25 18:56:41 np0005535838 systemd[1]: libpod-24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093.scope: Deactivated successfully.
Nov 25 18:56:41 np0005535838 podman[259259]: 2025-11-25 23:56:41.408626538 +0000 UTC m=+0.956876411 container died 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 18:56:41 np0005535838 systemd[1]: var-lib-containers-storage-overlay-0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b-merged.mount: Deactivated successfully.
Nov 25 18:56:41 np0005535838 podman[259259]: 2025-11-25 23:56:41.480160111 +0000 UTC m=+1.028409994 container remove 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 18:56:41 np0005535838 systemd[1]: libpod-conmon-24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093.scope: Deactivated successfully.
Nov 25 18:56:42 np0005535838 podman[259438]: 2025-11-25 23:56:42.320016671 +0000 UTC m=+0.066921401 container create dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:56:42 np0005535838 systemd[1]: Started libpod-conmon-dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4.scope.
Nov 25 18:56:42 np0005535838 podman[259438]: 2025-11-25 23:56:42.290565043 +0000 UTC m=+0.037469813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:56:42 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:56:42 np0005535838 podman[259438]: 2025-11-25 23:56:42.420833057 +0000 UTC m=+0.167737827 container init dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:56:42 np0005535838 podman[259438]: 2025-11-25 23:56:42.432218022 +0000 UTC m=+0.179122742 container start dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:56:42 np0005535838 podman[259438]: 2025-11-25 23:56:42.436591058 +0000 UTC m=+0.183495778 container attach dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:56:42 np0005535838 unruffled_joliot[259454]: 167 167
Nov 25 18:56:42 np0005535838 systemd[1]: libpod-dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4.scope: Deactivated successfully.
Nov 25 18:56:42 np0005535838 podman[259438]: 2025-11-25 23:56:42.440632507 +0000 UTC m=+0.187537227 container died dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 18:56:42 np0005535838 systemd[1]: var-lib-containers-storage-overlay-c0199f5cc8f5026ea0e5979b791f59876af8fd8626d2e029a989bc121e307e24-merged.mount: Deactivated successfully.
Nov 25 18:56:42 np0005535838 podman[259438]: 2025-11-25 23:56:42.488942538 +0000 UTC m=+0.235847268 container remove dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:56:42 np0005535838 systemd[1]: libpod-conmon-dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4.scope: Deactivated successfully.
Nov 25 18:56:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v839: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Nov 25 18:56:42 np0005535838 podman[259478]: 2025-11-25 23:56:42.766756258 +0000 UTC m=+0.064820325 container create 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:56:42 np0005535838 systemd[1]: Started libpod-conmon-0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d.scope.
Nov 25 18:56:42 np0005535838 podman[259478]: 2025-11-25 23:56:42.74552305 +0000 UTC m=+0.043587117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:56:42 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:56:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:42 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:56:42 np0005535838 podman[259478]: 2025-11-25 23:56:42.885099253 +0000 UTC m=+0.183163360 container init 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:56:42 np0005535838 podman[259478]: 2025-11-25 23:56:42.900853173 +0000 UTC m=+0.198917240 container start 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:56:42 np0005535838 podman[259478]: 2025-11-25 23:56:42.905271882 +0000 UTC m=+0.203335939 container attach 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:56:43 np0005535838 nova_compute[252550]: 2025-11-25 23:56:43.730 252558 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764114988.729541, 861debf8-73c8-45fe-92d9-fbfa772d34eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 18:56:43 np0005535838 nova_compute[252550]: 2025-11-25 23:56:43.732 252558 INFO nova.compute.manager [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] VM Stopped (Lifecycle Event)#033[00m
Nov 25 18:56:43 np0005535838 nova_compute[252550]: 2025-11-25 23:56:43.918 252558 DEBUG nova.compute.manager [None req-700a83da-3630-416d-8345-ea562f9ad019 - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]: {
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "osd_id": 2,
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "type": "bluestore"
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:    },
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "osd_id": 1,
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "type": "bluestore"
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:    },
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "osd_id": 0,
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:        "type": "bluestore"
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]:    }
Nov 25 18:56:43 np0005535838 crazy_kapitsa[259494]: }
Nov 25 18:56:44 np0005535838 systemd[1]: libpod-0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d.scope: Deactivated successfully.
Nov 25 18:56:44 np0005535838 systemd[1]: libpod-0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d.scope: Consumed 1.124s CPU time.
Nov 25 18:56:44 np0005535838 podman[259478]: 2025-11-25 23:56:44.019048998 +0000 UTC m=+1.317113085 container died 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 18:56:44 np0005535838 systemd[1]: var-lib-containers-storage-overlay-27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf-merged.mount: Deactivated successfully.
Nov 25 18:56:44 np0005535838 podman[259478]: 2025-11-25 23:56:44.099413457 +0000 UTC m=+1.397477494 container remove 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:56:44 np0005535838 systemd[1]: libpod-conmon-0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d.scope: Deactivated successfully.
Nov 25 18:56:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:56:44 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:56:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:56:44 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:56:44 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 11a5c889-aeac-446f-835f-78247efc7bf9 does not exist
Nov 25 18:56:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v840: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Nov 25 18:56:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:56:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:56:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 25 18:56:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 25 18:56:45 np0005535838 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 25 18:56:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v842: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 236 B/s wr, 20 op/s
Nov 25 18:56:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v843: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 204 B/s wr, 18 op/s
Nov 25 18:56:49 np0005535838 podman[259589]: 2025-11-25 23:56:49.280381572 +0000 UTC m=+0.093035480 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:56:50 np0005535838 nova_compute[252550]: 2025-11-25 23:56:50.128 252558 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764114995.1264982, bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 18:56:50 np0005535838 nova_compute[252550]: 2025-11-25 23:56:50.129 252558 INFO nova.compute.manager [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] VM Stopped (Lifecycle Event)#033[00m
Nov 25 18:56:50 np0005535838 nova_compute[252550]: 2025-11-25 23:56:50.271 252558 DEBUG nova.compute.manager [None req-0f65cb1f-472c-4d4b-afd3-b7c5f75861ca - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 18:56:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v844: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 204 B/s wr, 18 op/s
Nov 25 18:56:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v845: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:56:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v846: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:56:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:56:56
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'images', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms']
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:56:56 np0005535838 podman[259610]: 2025-11-25 23:56:56.242168032 +0000 UTC m=+0.062727859 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:56:56 np0005535838 podman[259609]: 2025-11-25 23:56:56.303288017 +0000 UTC m=+0.119736574 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 18:56:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v847: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:56:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v848: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v849: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:57:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:57:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v850: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v851: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:05 np0005535838 nova_compute[252550]: 2025-11-25 23:57:05.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:05 np0005535838 nova_compute[252550]: 2025-11-25 23:57:05.824 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v852: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:06 np0005535838 nova_compute[252550]: 2025-11-25 23:57:06.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:06 np0005535838 nova_compute[252550]: 2025-11-25 23:57:06.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.871 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.871 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.871 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.899 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.900 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.900 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.942 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.943 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.943 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.943 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 18:57:07 np0005535838 nova_compute[252550]: 2025-11-25 23:57:07.944 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:57:08 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:57:08 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2988001353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:57:08 np0005535838 nova_compute[252550]: 2025-11-25 23:57:08.370 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:57:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v853: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:08 np0005535838 nova_compute[252550]: 2025-11-25 23:57:08.606 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 18:57:08 np0005535838 nova_compute[252550]: 2025-11-25 23:57:08.607 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5238MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 18:57:08 np0005535838 nova_compute[252550]: 2025-11-25 23:57:08.607 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:57:08 np0005535838 nova_compute[252550]: 2025-11-25 23:57:08.608 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:57:08 np0005535838 nova_compute[252550]: 2025-11-25 23:57:08.784 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 18:57:08 np0005535838 nova_compute[252550]: 2025-11-25 23:57:08.785 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 18:57:08 np0005535838 nova_compute[252550]: 2025-11-25 23:57:08.807 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:57:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:57:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820429378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:57:09 np0005535838 nova_compute[252550]: 2025-11-25 23:57:09.233 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:57:09 np0005535838 nova_compute[252550]: 2025-11-25 23:57:09.240 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:57:09 np0005535838 nova_compute[252550]: 2025-11-25 23:57:09.315 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:57:09 np0005535838 nova_compute[252550]: 2025-11-25 23:57:09.482 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 18:57:09 np0005535838 nova_compute[252550]: 2025-11-25 23:57:09.483 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:57:10 np0005535838 nova_compute[252550]: 2025-11-25 23:57:10.406 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:10 np0005535838 nova_compute[252550]: 2025-11-25 23:57:10.406 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:10 np0005535838 nova_compute[252550]: 2025-11-25 23:57:10.407 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:57:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v854: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v855: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v856: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:15 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:57:15.681 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:82:13', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '36:f3:66:b7:57:d1'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 18:57:15 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:57:15.682 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 18:57:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v857: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:57:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2157042791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:57:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:57:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2157042791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:57:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v858: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:20 np0005535838 podman[259696]: 2025-11-25 23:57:20.256803675 +0000 UTC m=+0.079513298 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 18:57:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v859: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v860: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v861: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:24 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:57:24.684 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.682094) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045682130, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1948, "num_deletes": 266, "total_data_size": 1977437, "memory_usage": 2020312, "flush_reason": "Manual Compaction"}
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045696446, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1377443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15723, "largest_seqno": 17670, "table_properties": {"data_size": 1370089, "index_size": 4172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17896, "raw_average_key_size": 21, "raw_value_size": 1354256, "raw_average_value_size": 1610, "num_data_blocks": 187, "num_entries": 841, "num_filter_entries": 841, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114901, "oldest_key_time": 1764114901, "file_creation_time": 1764115045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 14424 microseconds, and 7957 cpu microseconds.
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.696508) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1377443 bytes OK
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.696544) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.698365) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.698385) EVENT_LOG_v1 {"time_micros": 1764115045698379, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.698405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1968935, prev total WAL file size 1968935, number of live WAL files 2.
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.699246) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1345KB)], [38(5624KB)]
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045699290, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 7136795, "oldest_snapshot_seqno": -1}
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 3904 keys, 5598075 bytes, temperature: kUnknown
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045741289, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 5598075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5570501, "index_size": 16712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 91755, "raw_average_key_size": 23, "raw_value_size": 5498850, "raw_average_value_size": 1408, "num_data_blocks": 720, "num_entries": 3904, "num_filter_entries": 3904, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764115045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.741558) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 5598075 bytes
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.743151) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.6 rd, 133.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 5.5 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 4370, records dropped: 466 output_compression: NoCompression
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.743224) EVENT_LOG_v1 {"time_micros": 1764115045743167, "job": 18, "event": "compaction_finished", "compaction_time_micros": 42078, "compaction_time_cpu_micros": 27829, "output_level": 6, "num_output_files": 1, "total_output_size": 5598075, "num_input_records": 4370, "num_output_records": 3904, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045743909, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045746271, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.699128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:57:25 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:57:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:57:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:57:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:57:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:57:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:57:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:57:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v862: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:27 np0005535838 podman[259717]: 2025-11-25 23:57:27.236036498 +0000 UTC m=+0.045288412 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 18:57:27 np0005535838 podman[259716]: 2025-11-25 23:57:27.256052264 +0000 UTC m=+0.074326060 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 18:57:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v863: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v864: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v865: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v866: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v867: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v868: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v869: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:40 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:57:40.766 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:57:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:57:40.766 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:57:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-25 23:57:40.767 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:57:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v870: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v871: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:57:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 52db2392-fb66-4d02-b889-2ef60b3c1fd8 does not exist
Nov 25 18:57:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 094f1874-3121-42b2-8170-77abbe15cf52 does not exist
Nov 25 18:57:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev ff4a0a3e-4b59-424c-90c5-c5323c79707b does not exist
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:57:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:57:46 np0005535838 podman[260035]: 2025-11-25 23:57:46.28014763 +0000 UTC m=+0.055224019 container create 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:57:46 np0005535838 systemd[1]: Started libpod-conmon-55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6.scope.
Nov 25 18:57:46 np0005535838 podman[260035]: 2025-11-25 23:57:46.252456899 +0000 UTC m=+0.027533338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:57:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:57:46 np0005535838 podman[260035]: 2025-11-25 23:57:46.37629688 +0000 UTC m=+0.151373239 container init 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:57:46 np0005535838 podman[260035]: 2025-11-25 23:57:46.385792625 +0000 UTC m=+0.160869004 container start 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:57:46 np0005535838 podman[260035]: 2025-11-25 23:57:46.389615547 +0000 UTC m=+0.164691936 container attach 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 18:57:46 np0005535838 optimistic_lalande[260051]: 167 167
Nov 25 18:57:46 np0005535838 systemd[1]: libpod-55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6.scope: Deactivated successfully.
Nov 25 18:57:46 np0005535838 conmon[260051]: conmon 55462a34330c2a458f37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6.scope/container/memory.events
Nov 25 18:57:46 np0005535838 podman[260035]: 2025-11-25 23:57:46.393430099 +0000 UTC m=+0.168506478 container died 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 18:57:46 np0005535838 systemd[1]: var-lib-containers-storage-overlay-4010bdd78fb8fd8801a0c35b1219ebeddc60d0656e9b9ec13814843a59efc72e-merged.mount: Deactivated successfully.
Nov 25 18:57:46 np0005535838 podman[260035]: 2025-11-25 23:57:46.43498627 +0000 UTC m=+0.210062659 container remove 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 18:57:46 np0005535838 systemd[1]: libpod-conmon-55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6.scope: Deactivated successfully.
Nov 25 18:57:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v872: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:46 np0005535838 podman[260075]: 2025-11-25 23:57:46.674865416 +0000 UTC m=+0.070898057 container create 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:57:46 np0005535838 systemd[1]: Started libpod-conmon-1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683.scope.
Nov 25 18:57:46 np0005535838 podman[260075]: 2025-11-25 23:57:46.64808501 +0000 UTC m=+0.044117711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:57:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:57:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:46 np0005535838 podman[260075]: 2025-11-25 23:57:46.771452129 +0000 UTC m=+0.167484840 container init 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:57:46 np0005535838 podman[260075]: 2025-11-25 23:57:46.790056036 +0000 UTC m=+0.186088667 container start 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:57:46 np0005535838 podman[260075]: 2025-11-25 23:57:46.79394431 +0000 UTC m=+0.189976981 container attach 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:57:47 np0005535838 cranky_mendeleev[260091]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:57:47 np0005535838 cranky_mendeleev[260091]: --> relative data size: 1.0
Nov 25 18:57:47 np0005535838 cranky_mendeleev[260091]: --> All data devices are unavailable
Nov 25 18:57:47 np0005535838 systemd[1]: libpod-1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683.scope: Deactivated successfully.
Nov 25 18:57:47 np0005535838 podman[260075]: 2025-11-25 23:57:47.860440071 +0000 UTC m=+1.256472712 container died 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:57:47 np0005535838 systemd[1]: libpod-1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683.scope: Consumed 1.027s CPU time.
Nov 25 18:57:47 np0005535838 systemd[1]: var-lib-containers-storage-overlay-dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4-merged.mount: Deactivated successfully.
Nov 25 18:57:47 np0005535838 podman[260075]: 2025-11-25 23:57:47.924680609 +0000 UTC m=+1.320713200 container remove 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 18:57:47 np0005535838 systemd[1]: libpod-conmon-1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683.scope: Deactivated successfully.
Nov 25 18:57:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v873: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:48 np0005535838 podman[260276]: 2025-11-25 23:57:48.754223844 +0000 UTC m=+0.069058649 container create 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 18:57:48 np0005535838 systemd[1]: Started libpod-conmon-39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35.scope.
Nov 25 18:57:48 np0005535838 podman[260276]: 2025-11-25 23:57:48.728794344 +0000 UTC m=+0.043629239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:57:48 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:57:48 np0005535838 podman[260276]: 2025-11-25 23:57:48.847294732 +0000 UTC m=+0.162129567 container init 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 18:57:48 np0005535838 podman[260276]: 2025-11-25 23:57:48.854482194 +0000 UTC m=+0.169316999 container start 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:57:48 np0005535838 podman[260276]: 2025-11-25 23:57:48.857506806 +0000 UTC m=+0.172341601 container attach 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:57:48 np0005535838 epic_yalow[260292]: 167 167
Nov 25 18:57:48 np0005535838 systemd[1]: libpod-39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35.scope: Deactivated successfully.
Nov 25 18:57:48 np0005535838 podman[260276]: 2025-11-25 23:57:48.859511639 +0000 UTC m=+0.174346434 container died 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 18:57:48 np0005535838 systemd[1]: var-lib-containers-storage-overlay-f488e99b0b8ecdb08c6d440bf61de4d4cc326a40962490efc6ef3b8d2084b1db-merged.mount: Deactivated successfully.
Nov 25 18:57:48 np0005535838 podman[260276]: 2025-11-25 23:57:48.890494478 +0000 UTC m=+0.205329283 container remove 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 25 18:57:48 np0005535838 systemd[1]: libpod-conmon-39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35.scope: Deactivated successfully.
Nov 25 18:57:49 np0005535838 podman[260316]: 2025-11-25 23:57:49.043807558 +0000 UTC m=+0.042502268 container create a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 18:57:49 np0005535838 systemd[1]: Started libpod-conmon-a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35.scope.
Nov 25 18:57:49 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:57:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:49 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:49 np0005535838 podman[260316]: 2025-11-25 23:57:49.114041876 +0000 UTC m=+0.112736586 container init a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:57:49 np0005535838 podman[260316]: 2025-11-25 23:57:49.121112505 +0000 UTC m=+0.119807195 container start a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 18:57:49 np0005535838 podman[260316]: 2025-11-25 23:57:49.02629985 +0000 UTC m=+0.024994560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:57:49 np0005535838 podman[260316]: 2025-11-25 23:57:49.12427559 +0000 UTC m=+0.122970290 container attach a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:57:49 np0005535838 keen_taussig[260332]: {
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:    "0": [
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:        {
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "devices": [
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "/dev/loop3"
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            ],
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_name": "ceph_lv0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_size": "21470642176",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "name": "ceph_lv0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "tags": {
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.cluster_name": "ceph",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.crush_device_class": "",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.encrypted": "0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.osd_id": "0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.type": "block",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.vdo": "0"
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            },
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "type": "block",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "vg_name": "ceph_vg0"
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:        }
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:    ],
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:    "1": [
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:        {
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "devices": [
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "/dev/loop4"
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            ],
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_name": "ceph_lv1",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_size": "21470642176",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "name": "ceph_lv1",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "tags": {
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.cluster_name": "ceph",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.crush_device_class": "",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.encrypted": "0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.osd_id": "1",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.type": "block",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.vdo": "0"
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            },
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "type": "block",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "vg_name": "ceph_vg1"
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:        }
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:    ],
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:    "2": [
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:        {
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "devices": [
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "/dev/loop5"
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            ],
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_name": "ceph_lv2",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_size": "21470642176",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "name": "ceph_lv2",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "tags": {
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.cephx_lockbox_secret": "",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.cluster_name": "ceph",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.crush_device_class": "",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.encrypted": "0",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.osd_id": "2",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.type": "block",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:                "ceph.vdo": "0"
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            },
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "type": "block",
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:            "vg_name": "ceph_vg2"
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:        }
Nov 25 18:57:49 np0005535838 keen_taussig[260332]:    ]
Nov 25 18:57:49 np0005535838 keen_taussig[260332]: }
Nov 25 18:57:49 np0005535838 systemd[1]: libpod-a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35.scope: Deactivated successfully.
Nov 25 18:57:49 np0005535838 podman[260316]: 2025-11-25 23:57:49.869939471 +0000 UTC m=+0.868634191 container died a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 18:57:49 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a-merged.mount: Deactivated successfully.
Nov 25 18:57:49 np0005535838 podman[260316]: 2025-11-25 23:57:49.939268685 +0000 UTC m=+0.937963385 container remove a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:57:49 np0005535838 systemd[1]: libpod-conmon-a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35.scope: Deactivated successfully.
Nov 25 18:57:50 np0005535838 podman[260454]: 2025-11-25 23:57:50.437971742 +0000 UTC m=+0.083662708 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 18:57:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v874: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:50 np0005535838 podman[260516]: 2025-11-25 23:57:50.676377828 +0000 UTC m=+0.047588324 container create b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:57:50 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:50 np0005535838 systemd[1]: Started libpod-conmon-b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c.scope.
Nov 25 18:57:50 np0005535838 podman[260516]: 2025-11-25 23:57:50.652024216 +0000 UTC m=+0.023234772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:57:50 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:57:50 np0005535838 podman[260516]: 2025-11-25 23:57:50.785502376 +0000 UTC m=+0.156712892 container init b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:57:50 np0005535838 podman[260516]: 2025-11-25 23:57:50.796304494 +0000 UTC m=+0.167514980 container start b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:57:50 np0005535838 podman[260516]: 2025-11-25 23:57:50.800593219 +0000 UTC m=+0.171803775 container attach b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 18:57:50 np0005535838 frosty_hypatia[260532]: 167 167
Nov 25 18:57:50 np0005535838 systemd[1]: libpod-b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c.scope: Deactivated successfully.
Nov 25 18:57:50 np0005535838 podman[260516]: 2025-11-25 23:57:50.803545318 +0000 UTC m=+0.174755804 container died b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:57:50 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2cfe88d61f6e11fe4a2fc479d3e58e3d9d780e9d7762fdfde99705a92b1e9d32-merged.mount: Deactivated successfully.
Nov 25 18:57:50 np0005535838 podman[260516]: 2025-11-25 23:57:50.854531232 +0000 UTC m=+0.225741718 container remove b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:57:50 np0005535838 systemd[1]: libpod-conmon-b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c.scope: Deactivated successfully.
Nov 25 18:57:51 np0005535838 podman[260557]: 2025-11-25 23:57:51.081865631 +0000 UTC m=+0.048233990 container create 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 18:57:51 np0005535838 systemd[1]: Started libpod-conmon-25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9.scope.
Nov 25 18:57:51 np0005535838 systemd[1]: Started libcrun container.
Nov 25 18:57:51 np0005535838 podman[260557]: 2025-11-25 23:57:51.067265321 +0000 UTC m=+0.033633670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:57:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:51 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:57:51 np0005535838 podman[260557]: 2025-11-25 23:57:51.204128621 +0000 UTC m=+0.170497010 container init 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:57:51 np0005535838 podman[260557]: 2025-11-25 23:57:51.215767072 +0000 UTC m=+0.182135441 container start 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 18:57:51 np0005535838 podman[260557]: 2025-11-25 23:57:51.219697828 +0000 UTC m=+0.186066237 container attach 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]: {
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "osd_id": 2,
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "type": "bluestore"
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:    },
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "osd_id": 1,
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "type": "bluestore"
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:    },
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "osd_id": 0,
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:        "type": "bluestore"
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]:    }
Nov 25 18:57:52 np0005535838 upbeat_hamilton[260573]: }
Nov 25 18:57:52 np0005535838 systemd[1]: libpod-25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9.scope: Deactivated successfully.
Nov 25 18:57:52 np0005535838 podman[260557]: 2025-11-25 23:57:52.313490919 +0000 UTC m=+1.279859248 container died 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:57:52 np0005535838 systemd[1]: libpod-25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9.scope: Consumed 1.108s CPU time.
Nov 25 18:57:52 np0005535838 systemd[1]: var-lib-containers-storage-overlay-3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f-merged.mount: Deactivated successfully.
Nov 25 18:57:52 np0005535838 podman[260557]: 2025-11-25 23:57:52.38385499 +0000 UTC m=+1.350223329 container remove 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:57:52 np0005535838 systemd[1]: libpod-conmon-25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9.scope: Deactivated successfully.
Nov 25 18:57:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:57:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:57:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:57:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:57:52 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev f621be23-c7be-4453-8439-22026dc0a501 does not exist
Nov 25 18:57:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v875: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:53 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:57:53 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 18:57:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v876: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:55 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:57:56
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'vms', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr']
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:57:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v877: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:57:58 np0005535838 podman[260671]: 2025-11-25 23:57:58.279694953 +0000 UTC m=+0.095939097 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 18:57:58 np0005535838 podman[260670]: 2025-11-25 23:57:58.358351457 +0000 UTC m=+0.176254985 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:57:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v878: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v879: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:00 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:58:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:58:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v880: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v881: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.693632) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085693669, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 572, "num_deletes": 251, "total_data_size": 421533, "memory_usage": 431840, "flush_reason": "Manual Compaction"}
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085698964, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 415976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17671, "largest_seqno": 18242, "table_properties": {"data_size": 412833, "index_size": 1115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7278, "raw_average_key_size": 19, "raw_value_size": 406539, "raw_average_value_size": 1069, "num_data_blocks": 50, "num_entries": 380, "num_filter_entries": 380, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764115045, "oldest_key_time": 1764115045, "file_creation_time": 1764115085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5390 microseconds, and 2847 cpu microseconds.
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.699020) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 415976 bytes OK
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.699041) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700384) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700405) EVENT_LOG_v1 {"time_micros": 1764115085700399, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700423) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 418358, prev total WAL file size 418358, number of live WAL files 2.
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700989) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(406KB)], [41(5466KB)]
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085701040, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 6014051, "oldest_snapshot_seqno": -1}
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 3771 keys, 4825966 bytes, temperature: kUnknown
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085738105, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 4825966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4800393, "index_size": 15032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 89630, "raw_average_key_size": 23, "raw_value_size": 4732153, "raw_average_value_size": 1254, "num_data_blocks": 641, "num_entries": 3771, "num_filter_entries": 3771, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764115085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.738450) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 4825966 bytes
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.740233) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.6 rd, 129.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 5.3 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(26.1) write-amplify(11.6) OK, records in: 4284, records dropped: 513 output_compression: NoCompression
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.740266) EVENT_LOG_v1 {"time_micros": 1764115085740252, "job": 20, "event": "compaction_finished", "compaction_time_micros": 37217, "compaction_time_cpu_micros": 24411, "output_level": 6, "num_output_files": 1, "total_output_size": 4825966, "num_input_records": 4284, "num_output_records": 3771, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085740524, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085742384, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:58:05 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:58:05 np0005535838 nova_compute[252550]: 2025-11-25 23:58:05.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:58:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v882: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:06 np0005535838 nova_compute[252550]: 2025-11-25 23:58:06.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:58:06 np0005535838 nova_compute[252550]: 2025-11-25 23:58:06.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:58:06 np0005535838 nova_compute[252550]: 2025-11-25 23:58:06.821 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 18:58:07 np0005535838 nova_compute[252550]: 2025-11-25 23:58:07.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:58:07 np0005535838 nova_compute[252550]: 2025-11-25 23:58:07.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 18:58:07 np0005535838 nova_compute[252550]: 2025-11-25 23:58:07.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 18:58:07 np0005535838 nova_compute[252550]: 2025-11-25 23:58:07.879 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 18:58:07 np0005535838 nova_compute[252550]: 2025-11-25 23:58:07.880 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:58:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v883: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:08 np0005535838 nova_compute[252550]: 2025-11-25 23:58:08.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:58:09 np0005535838 systemd-logind[789]: New session 52 of user zuul.
Nov 25 18:58:09 np0005535838 systemd[1]: Started Session 52 of User zuul.
Nov 25 18:58:09 np0005535838 nova_compute[252550]: 2025-11-25 23:58:09.534 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:58:09 np0005535838 nova_compute[252550]: 2025-11-25 23:58:09.535 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:58:09 np0005535838 nova_compute[252550]: 2025-11-25 23:58:09.535 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:58:09 np0005535838 nova_compute[252550]: 2025-11-25 23:58:09.536 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 18:58:09 np0005535838 nova_compute[252550]: 2025-11-25 23:58:09.536 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:58:09 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:58:09 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847819789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:58:09 np0005535838 nova_compute[252550]: 2025-11-25 23:58:09.998 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:58:10 np0005535838 nova_compute[252550]: 2025-11-25 23:58:10.168 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 18:58:10 np0005535838 nova_compute[252550]: 2025-11-25 23:58:10.169 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5218MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 18:58:10 np0005535838 nova_compute[252550]: 2025-11-25 23:58:10.170 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 18:58:10 np0005535838 nova_compute[252550]: 2025-11-25 23:58:10.170 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 18:58:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v884: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:10 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:58:10 np0005535838 nova_compute[252550]: 2025-11-25 23:58:10.944 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 18:58:10 np0005535838 nova_compute[252550]: 2025-11-25 23:58:10.945 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 18:58:10 np0005535838 nova_compute[252550]: 2025-11-25 23:58:10.965 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 18:58:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:58:11 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465709058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:58:11 np0005535838 nova_compute[252550]: 2025-11-25 23:58:11.479 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 18:58:11 np0005535838 nova_compute[252550]: 2025-11-25 23:58:11.486 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 18:58:11 np0005535838 nova_compute[252550]: 2025-11-25 23:58:11.541 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 18:58:11 np0005535838 nova_compute[252550]: 2025-11-25 23:58:11.544 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 18:58:11 np0005535838 nova_compute[252550]: 2025-11-25 23:58:11.544 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 18:58:12 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14704 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v885: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:12 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14706 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:13 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 18:58:13 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065359790' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 18:58:13 np0005535838 nova_compute[252550]: 2025-11-25 23:58:13.545 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:58:13 np0005535838 nova_compute[252550]: 2025-11-25 23:58:13.546 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:58:13 np0005535838 nova_compute[252550]: 2025-11-25 23:58:13.546 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 18:58:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v886: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:58:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v887: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:58:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2133630908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:58:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:58:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2133630908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:58:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v888: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:18 np0005535838 ovs-vsctl[261070]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 18:58:19 np0005535838 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 18:58:19 np0005535838 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 18:58:19 np0005535838 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 18:58:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v889: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:20 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: cache status {prefix=cache status} (starting...)
Nov 25 18:58:20 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:58:20 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: client ls {prefix=client ls} (starting...)
Nov 25 18:58:20 np0005535838 podman[261333]: 2025-11-25 23:58:20.953730204 +0000 UTC m=+0.339690756 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:58:21 np0005535838 lvm[261432]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 18:58:21 np0005535838 lvm[261432]: VG ceph_vg0 finished
Nov 25 18:58:21 np0005535838 lvm[261462]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 18:58:21 np0005535838 lvm[261462]: VG ceph_vg2 finished
Nov 25 18:58:21 np0005535838 lvm[261465]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 18:58:21 np0005535838 lvm[261465]: VG ceph_vg1 finished
Nov 25 18:58:21 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 18:58:21 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14714 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:21 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 18:58:21 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 18:58:21 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14716 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:21 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 18:58:21 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 18:58:22 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 18:58:22 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 18:58:22 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 18:58:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 18:58:22 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168829886' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 18:58:22 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14722 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:22 np0005535838 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 18:58:22 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:58:22.505+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 18:58:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v890: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:22 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: ops {prefix=ops} (starting...)
Nov 25 18:58:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:58:22 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166352582' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:58:22 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 18:58:22 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784088767' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 18:58:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 18:58:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3608204187' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 18:58:23 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: session ls {prefix=session ls} (starting...)
Nov 25 18:58:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 18:58:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655172804' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 18:58:23 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: status {prefix=status} (starting...)
Nov 25 18:58:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 18:58:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040537186' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 18:58:23 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 18:58:23 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944917165' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 18:58:23 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14736 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 18:58:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402274012' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 18:58:24 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14740 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 18:58:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424918935' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 18:58:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v891: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 18:58:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2302812068' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 18:58:24 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 18:58:24 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401400288' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 18:58:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 18:58:25 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35714324' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 18:58:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 18:58:25 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183257717' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 18:58:25 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14752 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:25 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:58:25.526+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 18:58:25 np0005535838 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 18:58:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:58:25 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14754 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 18:58:25 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611995556' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14758 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:58:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 18:58:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045823816' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14762 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v892: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:26 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14766 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016459 3 0.000043
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016459 3 0.000038
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016457 3 0.000068
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016375 3 0.000096
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016436 3 0.000100
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016215 3 0.000106
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016356 3 0.000112
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015997 3 0.000072
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016193 3 0.000089
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015494 3 0.000254
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015293 3 0.000099
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016498 3 0.000076
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015072 3 0.000711
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013990 3 0.001576
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015737 3 0.002123
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013893 3 0.000061
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015233 3 0.000122
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013842 3 0.000971
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013766 3 0.000184
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013657 3 0.000141
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013660 3 0.002636
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013714 3 0.000152
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013461 3 0.001285
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013556 3 0.000261
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 56000512 unmapped: 2662400 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 40 heartbeat osd_stat(store_statfs(0x4fe165000/0x0/0x4ffc00000, data 0x28d03/0x67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 40 handle_osd_map epochs [41,41], i have 40, src has [1,41]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 56016896 unmapped: 2646016 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 41 heartbeat osd_stat(store_statfs(0x4fe161000/0x0/0x4ffc00000, data 0x2a173/0x6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 56041472 unmapped: 2621440 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 320659 data_alloc: 218103808 data_used: 0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 56147968 unmapped: 2514944 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 41 handle_osd_map epochs [42,42], i have 41, src has [1,42]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.485052109s of 11.599431038s, submitted: 234
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000038
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000073
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000246 1 0.000068
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000035
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000156 1 0.000073
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000074 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000142 1 0.000047
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000037
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000128 1 0.000073
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000208 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000083 1 0.000067
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001609 1 0.000150
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000019
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000012 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000030
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.543269 13 0.000091
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.553523 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.553617 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.553666 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456751823s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465843201s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] exit Reset 0.000033 1 0.000052
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.478674 4 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.495312 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505432 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505516 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520897865s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530090332s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] exit Reset 0.000023 1 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.478663 4 0.000109
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.495329 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.506386 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.506419 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520854950s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530136108s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] exit Reset 0.000021 1 0.000032
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.543760 13 0.000051
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.554367 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.554459 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.554536 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456146240s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465744019s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] exit Reset 0.000123 1 0.000214
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.544096 13 0.000050
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.554573 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.554816 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.554853 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479349 4 0.000039
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.495994 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.504659 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.504692 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455876350s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465759277s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520113945s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530021667s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] exit Reset 0.000062 1 0.000088
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] exit Reset 0.000054 1 0.000084
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] exit Start 0.000012 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.544084 13 0.000386
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.555031 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.555066 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.555080 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479517 4 0.000093
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455184937s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465225220s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496119 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.504630 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.504670 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] exit Reset 0.000060 1 0.000062
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519988060s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530075073s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] exit Reset 0.000048 1 0.000078
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] exit Start 0.000012 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479829 4 0.000049
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496254 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.504737 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.504777 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.544674 13 0.000044
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.555360 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519800186s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530029297s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.555459 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.555488 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] exit Reset 0.000039 1 0.000063
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454953194s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465225220s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] exit Reset 0.000054 1 0.000082
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479790 4 0.000048
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496837 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505215 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505244 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519281387s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530158997s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] exit Reset 0.000047 1 0.000578
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] exit Start 0.000013 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.545584 13 0.000049
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.556397 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.556472 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.556501 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.480510 4 0.000060
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497037 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505627 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505655 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454054832s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465164185s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] exit Reset 0.000066 1 0.000095
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479984 4 0.000075
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497165 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505501 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505542 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518911362s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530174255s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] exit Reset 0.000049 1 0.000703
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.545773 13 0.000080
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.556813 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.556890 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.556920 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518821716s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530166626s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] exit Reset 0.000295 1 0.000316
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.480691 4 0.000031
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497227 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.506993 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.543632 13 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453702927s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465126038s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.556117 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] exit Reset 0.000079 1 0.000160
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.556273 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.556362 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.507018 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454210281s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465759277s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518668175s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530258179s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] exit Reset 0.000040 1 0.000199
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] exit Reset 0.000109 1 0.002219
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.546190 13 0.000165
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557340 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.557382 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.557404 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453030586s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464744568s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.481078 4 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497335 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.507508 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.507537 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518463135s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530212402s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.546294 13 0.000071
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] exit Reset 0.000083 1 0.000130
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] exit Reset 0.000029 1 0.000044
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557214 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.557299 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.557328 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.546533 13 0.000082
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] exit Start 0.000042 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557708 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.557789 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.557822 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452710152s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464645386s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] exit Reset 0.000093 1 0.000140
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.546757 13 0.000055
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557940 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.558011 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.558051 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452497482s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464584351s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.481346 4 0.000037
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497511 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505855 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505886 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] exit Reset 0.000086 1 0.000143
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518067360s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530242920s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] exit Reset 0.000037 1 0.000169
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.481448 4 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] exit Start 0.000020 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497257 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.506056 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.506088 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] exit Start 0.000067 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518050194s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530319214s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.544370 13 0.000256
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.557232 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.557279 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] exit Reset 0.000086 1 0.000135
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453290939s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465820312s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] exit Reset 0.000120 1 0.002195
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] exit Start 0.000024 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.547125 13 0.000070
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.558759 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.558797 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.558810 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451832771s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464576721s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482033 4 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] exit Reset 0.000044 1 0.000434
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496093 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508694 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.547497 13 0.000062
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508752 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452794075s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465126038s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.558814 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.558889 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.558920 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] exit Reset 0.001051 1 0.001076
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482146 4 0.000031
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497702 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508923 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508946 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517469406s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530319214s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451684952s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464576721s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] exit Reset 0.000094 1 0.000147
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] exit Reset 0.000101 1 0.000144
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] exit Start 0.000020 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517217636s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530265808s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] exit Reset 0.000211 1 0.000228
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.548825 13 0.000061
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482376 4 0.000040
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.559288 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497760 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.559387 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.509336 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.559428 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.509416 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.547416 13 0.000053
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.558419 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.559457 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.559526 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451214790s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464385986s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452562332s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465751648s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] exit Reset 0.000120 1 0.000161
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] exit Reset 0.000155 1 0.000174
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] exit Start 0.000043 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.548172 13 0.000060
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.559280 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.559365 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.559392 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451215744s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464645386s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] exit Reset 0.000028 1 0.000040
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482664 4 0.000039
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497951 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508379 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508413 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549232 13 0.000072
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516825676s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530311584s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.559987 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.560076 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.560101 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450861931s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464378357s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] exit Reset 0.000049 1 0.000286
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] exit Reset 0.000023 1 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549286 13 0.000083
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.560085 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.560222 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482814 4 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496744 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508812 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.560257 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508837 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516772270s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530380249s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] exit Start 0.000078 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] exit Reset 0.000024 1 0.000038
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450617790s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464271545s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549364 13 0.000060
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549294 13 0.000216
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.560242 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.559846 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.560471 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.559893 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.560521 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] exit Reset 0.000105 1 0.000152
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.559916 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450549126s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464279175s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450786591s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464523315s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] exit Start 0.000031 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516482353s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530273438s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] exit Reset 0.000724 1 0.000766
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.483061 4 0.000047
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496920 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] exit Start 0.000025 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] exit Reset 0.000037 1 0.000052
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.483269 4 0.000030
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496961 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508042 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508066 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516325951s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530418396s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549770 13 0.000061
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] exit Reset 0.000027 1 0.000043
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.483314 4 0.000044
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496927 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508368 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508404 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516283035s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530479431s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] exit Reset 0.000022 1 0.000045
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010290 2 0.000119
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009259 2 0.000071
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] exit Reset 0.000045 1 0.000068
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008987 2 0.000077
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009755 2 0.000074
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008522 2 0.000049
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005929 2 0.000067
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005793 2 0.000022
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005535 2 0.000039
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005686 2 0.000020
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.511394 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.511446 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515798569s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530448914s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] exit Reset 0.000101 1 0.000799
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] exit Start 0.000019 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.560875 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.561880 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.562452 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448849678s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464324951s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] exit Reset 0.000154 1 0.001453
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] exit Start 0.000027 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] enter Started/Stray
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000088 1 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000023
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000025
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000028
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 42 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000028
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000013
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000065 1 0.000043
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000132 1 0.000081
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000028
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000038
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000025
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000025
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000224 1 0.000039
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000171 1 0.000035
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000017
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000195 1 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000177 1 0.000034
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000123 1 0.000031
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000068 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000033
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000329 1 0.000065
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000017
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000244 1 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000033
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000124 1 0.000040
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000014
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000132 1 0.000034
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: unregistering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023451 2 0.000045
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022038 2 0.000024
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020212 2 0.000024
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021885 2 0.000023
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018810 2 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017354 2 0.000056
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016929 2 0.000021
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016264 2 0.000023
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014475 2 0.000100
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016357 2 0.000024
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015188 2 0.000041
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012612 2 0.000032
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011874 2 0.000081
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011340 2 0.000099
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010950 2 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013448 2 0.000037
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010413 2 0.000037
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014610 2 0.000021
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010093 2 0.000057
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014152 2 0.000041
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57114624 unmapped: 1548288 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 42 handle_osd_map epochs [42,43], i have 42, src has [1,43]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 42 handle_osd_map epochs [42,43], i have 43, src has [1,43]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966562 2 0.000043
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988730 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991974 2 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966673 2 0.000034
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997609 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990255 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966633 2 0.000080
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986948 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992159 2 0.000057
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997952 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992303 2 0.000017
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998176 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966827 2 0.000111
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988832 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966898 2 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983245 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967003 2 0.000032
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966488 2 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982924 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984546 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967038 2 0.000062
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985963 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992808 2 0.000039
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000410 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966658 2 0.000025
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982108 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967266 2 0.000020
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984271 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966194 2 0.000029
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979873 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993048 2 0.000028
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001744 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966205 2 0.000040
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980901 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967040 2 0.000077
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981752 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966245 2 0.000070
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980630 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993416 2 0.000032
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002590 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967084 2 0.000056
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979873 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967063 2 0.000043
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979382 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967218 2 0.000034
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.978860 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966922 2 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.977958 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993911 2 0.000025
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003400 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993854 2 0.000130
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004229 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994055 2 0.000023
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004851 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967186 2 0.000078
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.977479 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967375 2 0.000025
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.977948 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007613 3 0.000132
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007651 3 0.000131
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017206 4 0.000114
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017262 4 0.000093
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017309 4 0.000112
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017193 4 0.000066
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017169 4 0.000063
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017156 4 0.000059
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017177 4 0.000096
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017389 4 0.000085
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016982 4 0.000051
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016970 4 0.000049
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017065 4 0.000189
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016950 4 0.000060
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017626 4 0.000138
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016919 4 0.000098
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016882 4 0.000058
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016623 4 0.000046
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016924 4 0.000101
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016562 4 0.000117
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016691 4 0.000112
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016430 4 0.000067
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016303 4 0.000092
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015884 4 0.000071
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015939 4 0.000092
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016409 4 0.000943
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016036 4 0.000354
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016526 4 0.000732
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015912 4 0.000571
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022272 7 0.000054
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022363 7 0.000123
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021802 7 0.000114
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022935 7 0.000042
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000061 1 0.000063
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025149 7 0.000107
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000528 1 0.000081
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000566 1 0.000020
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000713 1 0.000033
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000530 1 0.000106
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034639 7 0.000096
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030956 7 0.000097
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030642 7 0.000129
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030424 7 0.000059
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030402 7 0.000125
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000125 1 0.000105
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030127 7 0.000152
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030562 7 0.000065
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033983 7 0.000069
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032532 7 0.000088
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032327 7 0.000207
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033934 7 0.000086
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034496 7 0.000117
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034254 7 0.000113
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035110 7 0.000074
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034136 7 0.000119
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031401 7 0.000063
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031498 7 0.000051
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032837 7 0.000101
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032083 7 0.000114
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000343 1 0.000032
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000443 1 0.000020
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000481 1 0.000150
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032008 7 0.000341
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030605 7 0.000291
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032369 7 0.000107
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030249 7 0.000709
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035322 7 0.000054
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033144 7 0.000098
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032620 7 0.000119
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029031 7 0.000178
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000672 1 0.000028
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030934 7 0.000113
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000704 1 0.000043
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000705 1 0.000021
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000684 1 0.000019
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000693 1 0.000017
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000852 1 0.000017
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000922 1 0.000024
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001066 1 0.000015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001161 1 0.000013
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001204 1 0.000015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001459 1 0.000145
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001406 1 0.000016
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001426 1 0.000024
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001497 1 0.000021
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001552 1 0.000020
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001370 1 0.000022
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001411 1 0.000015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.009527 1 0.000046
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.009607 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.031920 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001496 1 0.000014
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001535 1 0.000014
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001640 1 0.000053
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001711 1 0.000016
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001847 1 0.000017
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001900 1 0.000016
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001971 1 0.000064
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033730 7 0.000101
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035788 7 0.000128
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033466 7 0.000050
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034143 7 0.000175
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036886 7 0.000077
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035902 7 0.000693
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000137 1 0.000074
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033777 7 0.000055
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000098 1 0.000015
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000126 1 0.000014
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000177 1 0.000028
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000258 1 0.000014
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000302 1 0.000020
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000311 1 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036367 7 0.000103
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035653 7 0.000239
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036310 7 0.000060
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000076 1 0.000043
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000078 1 0.000053
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000139 1 0.000016
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.017604 1 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.018176 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.040594 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022298 1 0.000018
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.022894 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.044747 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.027884 1 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.028626 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.051584 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.035704 1 0.000019
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.036269 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.061518 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036046 1 0.000061
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.036214 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.070897 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.043142 1 0.000063
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.043509 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.074525 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.050399 1 0.000044
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.050866 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.081587 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.057770 1 0.000032
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.058286 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.088849 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.064964 1 0.000047
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.065654 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.096165 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.072290 1 0.000033
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.073016 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.103208 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.079670 1 0.000025
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.080392 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.110978 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.087045 1 0.000023
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.087745 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.120301 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.094396 1 0.000044
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.095119 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.127528 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.101848 1 0.000049
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.102738 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.137276 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.109002 1 0.000035
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.109954 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.143919 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.116227 1 0.000084
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.117323 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.151618 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.123587 1 0.000036
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.124783 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.159915 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.130970 1 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.132205 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.166374 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.138292 1 0.000053
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.139800 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.173825 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.145593 1 0.000028
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.147026 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.178460 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.152924 1 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.154375 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.185899 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57352192 unmapped: 1310720 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.166205 1 0.000065
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.167842 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.200753 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.171217 1 0.000039
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.172696 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.204788 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 43 heartbeat osd_stat(store_statfs(0x4fe160000/0x0/0x4ffc00000, data 0x2be25/0x6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.177086 1 0.000164
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.178671 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.210823 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182231 1 0.000023
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.183671 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.214300 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.189456 1 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.190976 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.223429 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197043 1 0.000098
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.198598 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.228882 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204104 1 0.000039
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.205772 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.241152 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.212105 1 0.000125
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.213867 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.247068 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.218835 1 0.000047
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.220719 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.253417 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.226103 1 0.000024
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.228110 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.259111 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.233199 1 0.000040
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.233356 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.267144 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.240534 1 0.000027
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.240653 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.276547 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.248224 1 0.000032
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.250148 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.279246 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.255497 1 0.000033
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.255660 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.289865 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.262991 1 0.000026
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.263208 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.296707 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.270473 1 0.000032
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.270783 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.307703 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.277626 1 0.000028
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.277994 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.314556 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.284916 1 0.000020
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.285279 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.319091 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.291848 1 0.000039
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.291975 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.328585 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.299565 1 0.000052
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.299718 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.335470 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.307367 1 0.000029
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.307603 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.343964 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57360384 unmapped: 1302528 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57360384 unmapped: 1302528 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 306714 data_alloc: 218103808 data_used: 8192
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57393152 unmapped: 1269760 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57393152 unmapped: 1269760 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57393152 unmapped: 1269760 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 43 heartbeat osd_stat(store_statfs(0x4fe15d000/0x0/0x4ffc00000, data 0x2d2a5/0x70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [44,47], i have 43, src has [1,47]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [44,47], i have 47, src has [1,47]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57573376 unmapped: 1089536 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57606144 unmapped: 1056768 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 322801 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57606144 unmapped: 1056768 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 47 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x3278b/0x7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57606144 unmapped: 1056768 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57606144 unmapped: 1056768 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.924494743s of 11.253636360s, submitted: 326
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 48 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x3278b/0x7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 48 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x33d8e/0x7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57679872 unmapped: 983040 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57753600 unmapped: 909312 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 327833 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x33d8e/0x7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57769984 unmapped: 892928 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57769984 unmapped: 892928 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57794560 unmapped: 868352 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 50 handle_osd_map epochs [51,52], i have 50, src has [1,52]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57868288 unmapped: 794624 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 52 heartbeat osd_stat(store_statfs(0x4fe142000/0x0/0x4ffc00000, data 0x3940f/0x8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57876480 unmapped: 786432 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 339221 data_alloc: 218103808 data_used: 24576
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57925632 unmapped: 737280 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57966592 unmapped: 696320 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=0 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000076 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=0 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000039
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000188 1 0.000069
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001479 2 0.000067
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57974784 unmapped: 688128 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.640701294s of 10.707428932s, submitted: 14
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 54 handle_osd_map epochs [54,55], i have 55, src has [1,55]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999131 2 0.000149
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001026 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=53/39 les/c/f=55/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004944 4 0.001076
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=53/39 les/c/f=55/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=53/39 les/c/f=55/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=53/39 les/c/f=55/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57999360 unmapped: 663552 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 55 heartbeat osd_stat(store_statfs(0x4fe13b000/0x0/0x4ffc00000, data 0x3c015/0x91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58032128 unmapped: 1679360 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 353755 data_alloc: 218103808 data_used: 24576
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58040320 unmapped: 1671168 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58097664 unmapped: 1613824 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58097664 unmapped: 1613824 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58114048 unmapped: 1597440 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58171392 unmapped: 1540096 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 365286 data_alloc: 218103808 data_used: 32768
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe130000/0x0/0x4ffc00000, data 0x416a6/0x9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58171392 unmapped: 1540096 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58171392 unmapped: 1540096 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe12c000/0x0/0x4ffc00000, data 0x42ca9/0xa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 59 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59269120 unmapped: 442368 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x4571c/0xa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.052805901s of 10.114706993s, submitted: 16
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f(unlocked)] enter Initial
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=0 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=0 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000021
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000131 1 0.000053
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000837 2 0.000043
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 18:58:26 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208784011' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59285504 unmapped: 425984 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 62 handle_osd_map epochs [62,63], i have 63, src has [1,63]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.855637 2 0.000068
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.856671 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.005781 3 0.000210
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000118 1 0.000086
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe122000/0x0/0x4ffc00000, data 0x46d1f/0xa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.127812 3 0.000075
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000066 0 0.000000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59359232 unmapped: 352256 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 387190 data_alloc: 218103808 data_used: 32768
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59367424 unmapped: 344064 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59367424 unmapped: 344064 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 385089 data_alloc: 218103808 data_used: 32768
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59383808 unmapped: 327680 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59392000 unmapped: 319488 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59408384 unmapped: 303104 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59408384 unmapped: 303104 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386236 data_alloc: 218103808 data_used: 32768
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.723413467s of 12.791498184s, submitted: 16
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59441152 unmapped: 270336 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59441152 unmapped: 270336 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59482112 unmapped: 229376 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 389943 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59482112 unmapped: 229376 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 389943 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59506688 unmapped: 204800 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59514880 unmapped: 196608 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.111580849s of 13.117458344s, submitted: 2
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59514880 unmapped: 196608 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 391091 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59523072 unmapped: 188416 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59523072 unmapped: 188416 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59531264 unmapped: 180224 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59531264 unmapped: 180224 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59539456 unmapped: 172032 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 393387 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59547648 unmapped: 163840 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59555840 unmapped: 155648 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59555840 unmapped: 155648 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59564032 unmapped: 147456 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59564032 unmapped: 147456 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395683 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59580416 unmapped: 131072 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59588608 unmapped: 122880 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395683 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59596800 unmapped: 114688 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59596800 unmapped: 114688 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.988166809s of 18.023300171s, submitted: 10
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59604992 unmapped: 106496 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59613184 unmapped: 98304 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59621376 unmapped: 90112 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 397979 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59629568 unmapped: 81920 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59629568 unmapped: 81920 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 400274 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.980714798s of 12.043251991s, submitted: 10
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59662336 unmapped: 49152 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402569 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59670528 unmapped: 40960 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59678720 unmapped: 32768 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 406013 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59695104 unmapped: 16384 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59695104 unmapped: 16384 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 408309 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59719680 unmapped: 1040384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59719680 unmapped: 1040384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 408309 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59736064 unmapped: 1024000 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.015459061s of 20.058856964s, submitted: 12
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59752448 unmapped: 1007616 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409457 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59752448 unmapped: 1007616 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59760640 unmapped: 999424 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59760640 unmapped: 999424 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59768832 unmapped: 991232 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59768832 unmapped: 991232 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 411752 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59777024 unmapped: 983040 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59777024 unmapped: 983040 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59785216 unmapped: 974848 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59801600 unmapped: 958464 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59817984 unmapped: 942080 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414047 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 933888 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 933888 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414047 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.928712845s of 17.964544296s, submitted: 10
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416341 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418635 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59916288 unmapped: 843776 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59916288 unmapped: 843776 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418635 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.752611160s of 12.872432709s, submitted: 8
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 819200 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 819200 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59949056 unmapped: 811008 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 420929 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59949056 unmapped: 811008 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422077 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.934057236s of 11.955801964s, submitted: 6
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59981824 unmapped: 778240 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59990016 unmapped: 770048 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423224 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 426667 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60047360 unmapped: 712704 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427814 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60055552 unmapped: 704512 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.913698196s of 12.949940681s, submitted: 10
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60071936 unmapped: 688128 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60080128 unmapped: 679936 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60088320 unmapped: 671744 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60088320 unmapped: 671744 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60121088 unmapped: 638976 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60121088 unmapped: 638976 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 630784 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 630784 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 622592 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 622592 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 589824 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 589824 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 565248 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 565248 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60219392 unmapped: 540672 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60227584 unmapped: 532480 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60268544 unmapped: 491520 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60293120 unmapped: 466944 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60293120 unmapped: 466944 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60301312 unmapped: 458752 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60301312 unmapped: 458752 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60325888 unmapped: 434176 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60334080 unmapped: 425984 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60334080 unmapped: 425984 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60342272 unmapped: 417792 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60350464 unmapped: 409600 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60350464 unmapped: 409600 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60358656 unmapped: 401408 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60358656 unmapped: 401408 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 393216 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 393216 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60399616 unmapped: 360448 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60416000 unmapped: 344064 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 335872 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 335872 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60432384 unmapped: 327680 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 319488 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 319488 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60473344 unmapped: 286720 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60473344 unmapped: 286720 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 278528 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 278528 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60497920 unmapped: 262144 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60497920 unmapped: 262144 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60506112 unmapped: 253952 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60506112 unmapped: 253952 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60571648 unmapped: 188416 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60571648 unmapped: 188416 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60588032 unmapped: 172032 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60596224 unmapped: 163840 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60596224 unmapped: 163840 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 155648 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 155648 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60612608 unmapped: 147456 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60620800 unmapped: 139264 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60620800 unmapped: 139264 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 122880 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 122880 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 114688 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 114688 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 106496 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60669952 unmapped: 90112 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60669952 unmapped: 90112 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 65536 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 65536 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 49152 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 49152 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 32768 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 32768 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 24576 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 24576 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 0 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 0 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60784640 unmapped: 1024000 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60784640 unmapped: 1024000 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60801024 unmapped: 1007616 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60801024 unmapped: 1007616 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60809216 unmapped: 999424 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60809216 unmapped: 999424 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60833792 unmapped: 974848 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60833792 unmapped: 974848 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60850176 unmapped: 958464 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60850176 unmapped: 958464 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60866560 unmapped: 942080 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60866560 unmapped: 942080 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60882944 unmapped: 925696 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60882944 unmapped: 925696 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60899328 unmapped: 909312 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60899328 unmapped: 909312 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60915712 unmapped: 892928 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60915712 unmapped: 892928 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60932096 unmapped: 876544 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60932096 unmapped: 876544 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60940288 unmapped: 868352 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 860160 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 860160 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 835584 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 835584 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60997632 unmapped: 811008 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60997632 unmapped: 811008 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61005824 unmapped: 802816 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61005824 unmapped: 802816 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61030400 unmapped: 778240 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61030400 unmapped: 778240 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61046784 unmapped: 761856 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61046784 unmapped: 761856 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 729088 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 729088 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 712704 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 712704 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 696320 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 696320 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 557056 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 557056 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 540672 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 540672 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 524288 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 524288 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 507904 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 507904 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61317120 unmapped: 491520 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61317120 unmapped: 491520 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61333504 unmapped: 475136 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61333504 unmapped: 475136 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61358080 unmapped: 450560 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61366272 unmapped: 442368 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61366272 unmapped: 442368 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61374464 unmapped: 434176 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61374464 unmapped: 434176 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61382656 unmapped: 425984 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 417792 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 417792 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61399040 unmapped: 409600 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61399040 unmapped: 409600 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61423616 unmapped: 385024 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 16.16 MB, 0.03 MB/s
Interval WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61480960 unmapped: 327680 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 319488 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 319488 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 303104 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 303104 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 278528 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 278528 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61546496 unmapped: 262144 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61546496 unmapped: 262144 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 253952 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 253952 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61562880 unmapped: 245760 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61571072 unmapped: 237568 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61571072 unmapped: 237568 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 221184 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 221184 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 212992 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 212992 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61612032 unmapped: 196608 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61612032 unmapped: 196608 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 180224 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 180224 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 163840 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 155648 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 131072 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 131072 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 98304 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 98304 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 73728 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 73728 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 57344 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 57344 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 49152 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 49152 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 24576 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 24576 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 0 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 0 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 1032192 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 1032192 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61833216 unmapped: 1024000 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61833216 unmapped: 1024000 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 1007616 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 1007616 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: monclient: no keepalive since 2025-11-25T23:46:45.136110+0000 (2106-02-07T06:28:15.999867+0000 seconds), reconnecting
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: monclient: found mon.compute-0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: set_mon_vals no callback set
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: mgrc handle_mgr_map Got map version 9
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: mgrc ms_handle_reset ms_handle_reset con 0x56223dd09c00
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: mgrc handle_mgr_configure stats_period=5
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown,
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 63 handle_osd_map epochs [64,65], i have 63, src has [1,65]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1071.016235352s of 1071.030395508s, submitted: 4
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 65 heartbeat osd_stat(store_statfs(0x4fe117000/0x0/0x4ffc00000, data 0x4afe6/0xb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 17514496 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 66 ms_handle_reset con 0x56223ea28c00 session 0x56223f2c0b40
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 17457152 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 502193 data_alloc: 218103808 data_used: 114688
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 17448960 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 17448960 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 67 ms_handle_reset con 0x56223e823c00 session 0x56223eae34a0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd4a3000/0x0/0x4ffc00000, data 0xcbc602/0xd2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539786 data_alloc: 218103808 data_used: 122880
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd49e000/0x0/0x4ffc00000, data 0xcbdbfb/0xd2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.124011040s of 11.322376251s, submitted: 43
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.678512573s of 28.689655304s, submitted: 13
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 69 ms_handle_reset con 0x56223e822800 session 0x56223eae2d20
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 17227776 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 17227776 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fd496000/0x0/0x4ffc00000, data 0xcc0a78/0xd37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 17219584 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604393 data_alloc: 218103808 data_used: 131072
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 25485312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fbc96000/0x0/0x4ffc00000, data 0x24c0a88/0x2538000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 25436160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 69 ms_handle_reset con 0x56223e822c00 session 0x56223f2205a0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 25436160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 25419776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 71 ms_handle_reset con 0x56223e823000 session 0x56223f221a40
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 24428544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 564280 data_alloc: 218103808 data_used: 139264
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 24387584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc3640/0xd3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc3640/0xd3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 72 ms_handle_reset con 0x56223e822800 session 0x56223f707e00
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 24199168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 24199168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.976924896s of 11.311155319s, submitted: 70
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 73 ms_handle_reset con 0x56223e822c00 session 0x56223f5f1e00
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 24100864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 73 ms_handle_reset con 0x56223e823c00 session 0x56223f5f14a0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 23977984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 574353 data_alloc: 218103808 data_used: 139264
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 23920640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223ea28c00 session 0x56223f5f0960
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e823400 session 0x56223f742f00
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 23904256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e823400 session 0x56223f5bf4a0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e822800 session 0x56223f5be960
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd481000/0x0/0x4ffc00000, data 0xcc8cfc/0xd4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 23904256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 76 ms_handle_reset con 0x56223e822c00 session 0x56223f707a40
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 76 ms_handle_reset con 0x56223e823c00 session 0x56223f2c0780
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 23764992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fd480000/0x0/0x4ffc00000, data 0xcca2d4/0xd4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [0,0,1,1])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 77 ms_handle_reset con 0x56223f254c00 session 0x56223f59f860
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 23625728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 77 ms_handle_reset con 0x56223ea28c00 session 0x56223f2e7860
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 588235 data_alloc: 218103808 data_used: 147456
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 23592960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 23592960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 23584768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005178452s of 10.492918968s, submitted: 152
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822c00 session 0x56223f2205a0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822800 session 0x56223ea89860
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822400 session 0x56223ea89680
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd479000/0x0/0x4ffc00000, data 0xcce187/0xd54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 594951 data_alloc: 218103808 data_used: 147456
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 23461888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 23461888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 23453696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fd476000/0x0/0x4ffc00000, data 0xccf683/0xd57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 80 ms_handle_reset con 0x56223e823c00 session 0x56223ea88f00
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 23453696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 597747 data_alloc: 218103808 data_used: 147456
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 82 ms_handle_reset con 0x56223e822400 session 0x56223f5f0960
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 23298048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 83 ms_handle_reset con 0x56223e823c00 session 0x56223eae3860
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 23281664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 84 ms_handle_reset con 0x56223e822c00 session 0x56223eae2d20
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 22175744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.822913170s of 10.012957573s, submitted: 69
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 85 ms_handle_reset con 0x56223e822800 session 0x56223eae2b40
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 22110208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 86 ms_handle_reset con 0x56223ea28c00 session 0x56223f2c1a40
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 86 heartbeat osd_stat(store_statfs(0x4fd45f000/0x0/0x4ffc00000, data 0xcd6c4b/0xd6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 634272 data_alloc: 218103808 data_used: 200704
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fd459000/0x0/0x4ffc00000, data 0xcd996d/0xd74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 22036480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 87 ms_handle_reset con 0x56223e822800 session 0x56223f2210e0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 88 ms_handle_reset con 0x56223e822400 session 0x56223f2c10e0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 20619264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 88 ms_handle_reset con 0x56223e822c00 session 0x56223f220000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 641350 data_alloc: 218103808 data_used: 200704
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 20455424 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 89 ms_handle_reset con 0x56223e86dc00 session 0x56223f220f00
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xd01d2a/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 20348928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 90 ms_handle_reset con 0x562240975000 session 0x56223f707c20
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 20234240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 91 ms_handle_reset con 0x562240975000 session 0x56223f5bef00
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.528375626s of 10.122215271s, submitted: 156
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822c00 session 0x56223ea01680
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822400 session 0x56223f11bc20
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 19128320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822800 session 0x56223e9d41e0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0xd04238/0xda0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 18169856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e86dc00 session 0x56223ea010e0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 652558 data_alloc: 218103808 data_used: 217088
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e822400 session 0x56223ea005a0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e822800 session 0x56223f2e8960
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 18112512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 18079744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fbe76000/0x0/0x4ffc00000, data 0xd06d56/0xda6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 18022400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 95 ms_handle_reset con 0x56223e822c00 session 0x56223f2e92c0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975000 session 0x56223f2e8b40
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 17973248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664123 data_alloc: 218103808 data_used: 229376
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fbe70000/0x0/0x4ffc00000, data 0xd099ba/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975400 session 0x56223e9d4000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822400 session 0x56223ea00000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975400 session 0x56223f707e00
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822800 session 0x56223f2e74a0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975000 session 0x56223dc9da40
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975c00 session 0x56223f220780
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975c00 session 0x56223f743e00
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822400 session 0x56223ea005a0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 17948672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fbe70000/0x0/0x4ffc00000, data 0xd099ba/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.487519264s of 10.742533684s, submitted: 87
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x56223e822800 session 0x56223ea010e0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975000 session 0x56223f2e8b40
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975400 session 0x56223f2e92c0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668581 data_alloc: 218103808 data_used: 237568
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975000 session 0x56223f2e8000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0xd0aea2/0xdb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 17989632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668581 data_alloc: 218103808 data_used: 237568
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x562240975c00 session 0x56223f2c03c0
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x562240654c00 session 0x56223f2f8000
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x56224066d000 session 0x56223f706d20
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x56223e822c00 session 0x56223f743a40
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 17915904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fbe6a000/0x0/0x4ffc00000, data 0xd0c45c/0xdb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 98 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: osd.2 99 ms_handle_reset con 0x562240654c00 session 0x56223f5be960
Nov 25 18:58:26 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0xd0de6a/0xdb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 100 ms_handle_reset con 0x56224066cc00 session 0x56223f5f0f00
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679111 data_alloc: 218103808 data_used: 241664
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.938570976s of 11.146072388s, submitted: 55
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56224066c800 session 0x56223f220780
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56223e822400 session 0x56223f59e5a0
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56223e822800 session 0x56223ea01c20
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fbe61000/0x0/0x4ffc00000, data 0xd1066a/0xdbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 17825792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 102 ms_handle_reset con 0x56223e822400 session 0x56223f2205a0
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683786 data_alloc: 218103808 data_used: 245760
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fbe5f000/0x0/0x4ffc00000, data 0xd11c84/0xdbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x56223e823c00 session 0x56223f5f14a0
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x56223f255000 session 0x56223f2f9680
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 17768448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x562240654c00 session 0x56223ea01a40
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fbe5c000/0x0/0x4ffc00000, data 0xd13140/0xdc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 17768448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683002 data_alloc: 218103808 data_used: 241664
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 104 ms_handle_reset con 0x56224066c800 session 0x56223eae2780
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.860716820s of 10.143070221s, submitted: 96
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 17752064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 105 ms_handle_reset con 0x56224066d000 session 0x56223f2e9c20
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689942 data_alloc: 218103808 data_used: 245760
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689942 data_alloc: 218103808 data_used: 245760
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 17727488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.980017662s of 12.055953979s, submitted: 14
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 692244 data_alloc: 218103808 data_used: 245760
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 17489920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}'
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: do_command 'config show' '{prefix=config show}'
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 17170432 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 16916480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 17014784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:27 np0005535838 ceph-osd[91111]: do_command 'log dump' '{prefix=log dump}'
Nov 25 18:58:27 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:58:27 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 18:58:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140068122' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 18:58:27 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14772 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:27 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 18:58:27 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109943533' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 18:58:27 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14776 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:58:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 18:58:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/51285997' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 18:58:28 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14780 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:58:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 18:58:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987879754' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 18:58:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v893: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:28 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14784 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:58:28 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 18:58:28 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2177530395' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 18:58:29 np0005535838 podman[262588]: 2025-11-25 23:58:29.259780266 +0000 UTC m=+0.070323373 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 18:58:29 np0005535838 podman[262577]: 2025-11-25 23:58:29.302196039 +0000 UTC m=+0.121204946 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:58:29 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14792 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:58:29 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:58:29.459+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 18:58:29 np0005535838 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 18:58:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 18:58:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190590891' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 18:58:29 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 18:58:29 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620990861' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583569641' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521465429' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3747309463' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 18:58:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v894: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028224869' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 18:58:30 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/434467421' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013628 2 0.000043
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000042
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000067
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000014
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000030
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013164 2 0.000060
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012811 2 0.000109
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000034 1 0.000039
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000039
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012391 2 0.000105
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.563383 17 0.000123
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.568710 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.568837 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.568867 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436471939s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.733673096s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] exit Reset 0.000085 1 0.000132
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] exit Start 0.000013 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] enter Started/Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000042
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000032
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000271 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000045
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013111 2 0.000058
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000033
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000041
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000022
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.013066 2 0.000070
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000032
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000043
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000205 1 0.000115
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000095 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000035
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000136 1 0.000088
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013970 2 0.000062
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000025
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000077
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000082 1 0.000037
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000056 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000033
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000019 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000101 1 0.000073
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000025
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000020 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000073
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.019024 2 0.000027
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.015595 2 0.000057
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000026
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000117 1 0.000071
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000027
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000128 1 0.000075
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000019
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000034
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000014 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000027
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000036
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000013
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000277 1 0.000031
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016128 2 0.000026
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024992 2 0.000067
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024081 2 0.000090
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.023676 2 0.000071
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023415 2 0.000059
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021783 2 0.000106
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021299 2 0.000091
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021006 2 0.000047
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017882 2 0.000079
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017303 2 0.000070
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017214 2 0.000029
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016946 2 0.000032
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016710 2 0.000025
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017635 2 0.000026
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017222 2 0.000027
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016885 2 0.000016
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016695 2 0.000019
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016281 2 0.000082
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015809 2 0.000074
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017147 2 0.000027
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017413 2 0.000026
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014931 2 0.000203
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013974 2 0.000070
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012868 2 0.000024
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015466 2 0.000056
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013478 2 0.000055
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012409 2 0.000039
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013219 2 0.000057
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014784 2 0.000074
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013147 2 0.000021
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 42 heartbeat osd_stat(store_statfs(0x4fe0ef000/0x0/0x4ffc00000, data 0x9c035/0xde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58130432 unmapped: 1499136 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 42 handle_osd_map epochs [42,43], i have 42, src has [1,43]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 42 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 42 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972849 2 0.000036
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986264 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972598 2 0.000396
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972971 2 0.000037
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985850 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985702 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972895 2 0.000534
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987835 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973322 2 0.000032
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976882 2 0.000027
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987483 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993115 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973919 2 0.000063
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989957 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973761 2 0.000052
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989415 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973931 2 0.000026
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986911 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974025 2 0.000031
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987671 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974229 2 0.000055
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989302 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974408 2 0.000024
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991632 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974755 2 0.000034
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992063 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974473 2 0.000278
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991954 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976000 2 0.000041
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992810 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976053 2 0.000067
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993105 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974937 2 0.000060
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992674 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976294 2 0.000028
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993821 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976220 2 0.000065
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993549 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976680 2 0.000060
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994671 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975048 2 0.000035
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991589 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975132 2 0.000066
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992097 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978129 2 0.000053
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999333 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978714 2 0.000075
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978744 2 0.000043
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000933 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000297 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978948 2 0.000032
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002506 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975931 2 0.000037
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992705 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979224 2 0.000032
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979170 2 0.000031
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003926 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.003024 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979368 2 0.000089
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004590 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991849 2 0.000093
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006026 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989807 2 0.001072
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.005629 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993223 2 0.000033
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.006534 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993654 2 0.000082
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006915 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994667 2 0.000040
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007285 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995044 2 0.000026
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008082 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995218 2 0.000028
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008553 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996097 2 0.000039
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.008951 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996438 2 0.000088
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009360 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990473 2 0.000089
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.009673 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996277 2 0.000069
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010081 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996524 2 0.000448
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.011004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010266 4 0.000125
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010342 4 0.000192
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010299 4 0.000146
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010194 4 0.000063
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014016 7 0.000056
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000348 1 0.000055
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019871 4 0.000080
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019334 4 0.000093
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020283 4 0.000108
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019848 4 0.000056
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020307 4 0.000129
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020034 4 0.000059
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019904 4 0.000071
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019853 4 0.000044
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019888 4 0.000174
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019765 4 0.000040
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019732 4 0.000038
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019723 4 0.000038
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019691 4 0.000039
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019688 4 0.000216
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019670 4 0.000039
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019493 4 0.000055
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019424 4 0.000050
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018920 4 0.000059
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018916 4 0.000161
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019481 4 0.000083
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018769 4 0.000058
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018723 4 0.000100
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018533 4 0.000087
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018412 4 0.000083
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018709 4 0.000166
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018389 4 0.000058
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.018663 5 0.000273
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018338 4 0.000058
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018242 4 0.000085
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020152 4 0.000106
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.018425 5 0.000168
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018196 4 0.000044
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000197 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017913 4 0.000059
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.018158 5 0.000079
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018035 4 0.000104
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.018024 5 0.000162
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.017848 4 0.000233
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.019038 5 0.000429
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000610 1 0.000589
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020544 7 0.000107
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019114 7 0.000114
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018970 7 0.000121
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018830 7 0.000075
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018847 7 0.000053
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018575 7 0.000094
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017748 7 0.000122
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019474 7 0.000096
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019117 7 0.000124
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018433 7 0.000099
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017670 7 0.000156
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017689 7 0.000102
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018111 7 0.000434
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017214 7 0.000058
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017371 7 0.000054
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017131 7 0.000119
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017539 7 0.000082
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018023 7 0.000255
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019288 7 0.000121
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014176 1 0.000071
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.014579 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.028636 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023852 7 0.000119
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029974 7 0.001545
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026382 7 0.000118
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026767 7 0.000141
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026953 7 0.000069
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029906 7 0.000132
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027607 7 0.000077
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028332 7 0.000182
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028700 7 0.000105
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027369 7 0.000130
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027202 7 0.000078
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029064 7 0.000121
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028138 7 0.000331
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028823 7 0.000091
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029416 7 0.000192
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029241 7 0.000472
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027491 7 0.000106
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030989 7 0.000041
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027538 7 0.000115
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028741 7 0.000141
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029786 7 0.000457
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.015368 1 0.000067
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.015984 1 0.000535
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.878012657s of 10.115623474s, submitted: 499
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.292794 1 0.000079
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.308847 1 0.000497
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000018 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.144758 1 0.000109
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.453646 1 0.000449
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000018 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.141584 1 0.000100
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.594948 1 0.000047
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.667516 2 0.000426
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.072085 1 0.000116
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000036 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.215949 1 0.000126
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881819 1 0.000027
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881980 1 0.000012
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881790 1 0.000021
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881795 1 0.000054
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881900 1 0.000061
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881956 1 0.000020
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882008 1 0.000013
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882002 1 0.000077
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882069 1 0.000013
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882111 1 0.000013
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882168 1 0.000012
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882233 1 0.000014
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882422 1 0.000122
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882442 1 0.000014
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882509 1 0.000015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882751 1 0.000031
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882845 1 0.000015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.883069 1 0.000027
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882535 1 0.000882
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874821 1 0.000035
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874249 1 0.000032
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874300 1 0.000025
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874340 1 0.000025
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874411 1 0.000030
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874488 1 0.000044
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874053 1 0.000036
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874157 1 0.000024
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874217 1 0.000019
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874288 1 0.000019
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874419 1 0.000020
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874459 1 0.000028
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874535 1 0.000017
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874636 1 0.000016
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874834 1 0.000020
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874937 1 0.000017
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875027 1 0.000017
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875211 1 0.000029
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875292 1 0.000019
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875350 1 0.000017
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875400 1 0.000017
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.010538 1 0.000122
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.892456 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.913034 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014778 1 0.000033
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.896792 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.915965 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58540032 unmapped: 1089536 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022306 1 0.000030
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.904140 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.923161 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029560 1 0.000025
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.911424 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.930306 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036949 1 0.000035
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.918955 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.937559 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044335 1 0.000118
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.926285 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.945159 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051643 1 0.000025
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.933687 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.951509 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058711 1 0.000033
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.940814 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.958586 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066151 1 0.000090
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.948188 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.966746 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073471 1 0.000023
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.955603 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.973370 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.080780 1 0.000075
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.962982 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.981444 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088138 1 0.000066
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.970424 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.987666 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095405 1 0.000112
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.977862 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.997061 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102733 1 0.000054
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.985210 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.002605 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [44,44], i have 43, src has [1,44]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.113388 1 0.000021
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.995963 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.013593 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000123 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000050 1 0.000089
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000592 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000276 1 0.000890
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000030
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000154 1 0.000169
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000085 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000027
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000019 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000101 1 0.000074
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000046
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000155 1 0.000083
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002173 2 0.000166
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000034 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002901 2 0.000064
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002560 2 0.000045
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002105 2 0.000071
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.125351 4 0.000106
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.125247 4 0.000194
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.008153 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.008167 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.026228 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.025368 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.131524 4 0.000047
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.014630 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.033983 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.138779 4 0.000039
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.021358 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.041735 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.146142 4 0.000157
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.021055 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.044960 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.153617 4 0.000052
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.027914 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.057928 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.160890 4 0.000025
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.035260 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.061731 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.168064 4 0.000043
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.042522 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.072494 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.175533 4 0.000095
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.049949 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.076932 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182743 4 0.000026
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057266 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.084095 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.190368 4 0.000101
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.064493 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.092172 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197436 4 0.000056
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.071647 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.100144 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204688 4 0.000029
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.078945 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.107719 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.211860 4 0.000037
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086324 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.113587 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.219797 4 0.000171
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.094143 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.121573 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.c( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.226741 4 0.000032
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.c( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.101234 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.c( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.130324 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.234042 4 0.000107
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.108629 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.137036 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.241354 4 0.000077
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.116059 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.145522 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.248572 4 0.000051
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.123471 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.152340 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.255776 4 0.000102
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.130764 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.160035 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.263202 4 0.000060
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.138323 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.165859 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.270339 4 0.000090
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.145687 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.176729 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.277439 4 0.000046
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.152780 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.180392 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.284997 4 0.000036
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.160394 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.189222 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.12( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.292135 4 0.000023
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.12( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.167576 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.12( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.197395 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58531840 unmapped: 2146304 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 362176 data_alloc: 218103808 data_used: 0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 44 handle_osd_map epochs [44,45], i have 45, src has [1,45]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997991 2 0.000138
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000348 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998089 2 0.000126
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.000929 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998399 2 0.000067
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.001585 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999866 2 0.000238
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002524 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004684 3 0.000171
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000027 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011059 4 0.000129
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.011139 4 0.000415
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.011617 4 0.000274
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000148 1 0.000110
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000017 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.016560 2 0.000089
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.016599 2 0.000176
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000012 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.064804 1 0.000109
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58703872 unmapped: 1974272 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 3.006733 7 0.000077
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 3.041584 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 4.044637 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 4.044694 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976484299s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309509277s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] exit Reset 0.000140 1 0.000204
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] exit Start 0.000016 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] enter Started/Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 2.355219 7 0.000231
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 3.041575 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 4.047231 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 4.047271 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976278305s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309593201s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] exit Reset 0.000141 1 0.000207
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] exit Start 0.000019 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] enter Started/Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 2.569489 7 0.000068
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 3.041406 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 4.050376 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 4.050400 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976170540s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309707642s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 2.139547 7 0.000224
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 3.041204 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 4.052237 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 4.052277 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] exit Reset 0.000091 1 0.000133
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976114273s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309745789s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] exit Start 0.000015 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] enter Started/Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] exit Reset 0.000085 1 0.000124
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] enter Started/Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 46 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58810368 unmapped: 1867776 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 46 handle_osd_map epochs [46,47], i have 46, src has [1,47]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027789 7 0.000175
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027345 7 0.000266
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027590 7 0.000186
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 47 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 47 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027415 7 0.000116
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.017389 2 0.000092
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.017513 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000108 1 0.000263
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.076207 2 0.000058
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.076307 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000143 1 0.000281
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 DELETING pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.120835 2 0.000288
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.121059 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.166491 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 DELETING pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.135980 2 0.000305
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.136271 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.240287 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.284396 2 0.000050
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.284461 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000136 1 0.000146
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.409841 2 0.000063
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.409886 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000155 1 0.000087
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 DELETING pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.135139 2 0.000246
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.135553 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.447544 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 DELETING pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.032028 2 0.000215
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.032286 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.469667 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58703872 unmapped: 1974272 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 47 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0xa2a3b/0xef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58703872 unmapped: 1974272 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58703872 unmapped: 1974272 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 356691 data_alloc: 218103808 data_used: 0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 47 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0xa2a3b/0xef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58728448 unmapped: 1949696 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58728448 unmapped: 1949696 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 47 handle_osd_map epochs [47,48], i have 47, src has [1,48]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58736640 unmapped: 1941504 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 48 heartbeat osd_stat(store_statfs(0x4fe0db000/0x0/0x4ffc00000, data 0xa403e/0xf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=0 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000104 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=0 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000045
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000202 1 0.000074
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=0 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=0 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000017
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000117 1 0.000044
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001091 2 0.000068
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.000844 2 0.000054
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.566368103s of 10.003333092s, submitted: 153
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58736640 unmapped: 1941504 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 48 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.593731 2 0.000086
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.593373 2 0.000077
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.595099 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 1.594402 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 11.508167 17 0.000105
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 11.835573 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 12.842137 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 12.842177 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182587624s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active pruub 97.309936523s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] exit Reset 0.000124 1 0.000180
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] enter Started/Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 11.221861 17 0.000095
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 11.835303 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 12.844997 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 12.845050 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182564735s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active pruub 97.310157776s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] exit Reset 0.000092 1 0.000118
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] enter Started/Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.004755 5 0.000240
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.004849 4 0.000186
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000107 1 0.000046
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.040860 1 0.000050
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.041074 2 0.000022
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 851968 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 372598 data_alloc: 218103808 data_used: 8192
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.253074 1 0.000070
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013354 6 0.000119
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013583 6 0.000143
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.070249 3 0.000075
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.070299 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000126 1 0.000097
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 753664 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.196211 3 0.000093
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.196264 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000158 1 0.000095
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 DELETING pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.130512 2 0.000237
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.130693 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.214421 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 DELETING pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.023458 2 0.000192
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.023695 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.233629 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fe0d6000/0x0/0x4ffc00000, data 0xa5c65/0xf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 753664 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 745472 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 737280 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 712704 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 378620 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fe0d4000/0x0/0x4ffc00000, data 0xa7035/0xf9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 671744 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0xa9c3b/0xff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 557056 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 19.911372 33 0.000110
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 19.929968 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 20.934610 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 20.934659 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.088303566s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 105.309837341s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] exit Reset 0.000355 1 0.000410
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] exit Start 0.000119 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] enter Started/Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 54 handle_osd_map epochs [53,54], i have 54, src has [1,54]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 540672 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.732617378s of 10.148954391s, submitted: 40
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 54 heartbeat osd_stat(store_statfs(0x4fe0c7000/0x0/0x4ffc00000, data 0xac841/0x105000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 540672 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.502368 6 0.000504
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000717 1 0.000084
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 19.409195 31 0.000174
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 19.414038 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 20.414417 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 20.414457 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590789795s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 107.316719055s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] exit Reset 0.000164 1 0.000248
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] exit Start 0.000016 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] enter Started/Stray
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.013391 2 0.000105
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.014206 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.516826 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 491520 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 390250 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.986485 6 0.000159
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000271 1 0.000054
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] lb MIN local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.003829 2 0.000121
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] lb MIN local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.004189 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] lb MIN local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.990739 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 450560 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 442368 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=0 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=0 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000041
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000153 1 0.000065
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001320 2 0.000137
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000034 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 401408 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 57 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.166969 2 0.000268
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.168649 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005532 4 0.000229
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000232 1 0.000306
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000053 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.012069 2 0.000279
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000054 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 385024 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 58 heartbeat osd_stat(store_statfs(0x4fe0bb000/0x0/0x4ffc00000, data 0xb1ed2/0x111000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 385024 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401296 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d(unlocked)] enter Initial
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=0 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=0 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000049
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000226 1 0.000093
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000961 2 0.000073
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000036 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 335872 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 59 handle_osd_map epochs [59,60], i have 60, src has [1,60]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.417097 2 0.000301
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.418506 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002997 3 0.000495
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000086 1 0.000073
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.102074 3 0.000038
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 319488 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61440000 unmapped: 286720 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 61 heartbeat osd_stat(store_statfs(0x4fe0b0000/0x0/0x4ffc00000, data 0xb5f8e/0x11b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61480960 unmapped: 245760 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 237568 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416683 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 61 handle_osd_map epochs [62,63], i have 61, src has [1,63]
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.581572533s of 11.712023735s, submitted: 33
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 1146880 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 1138688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 1130496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 1122304 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 1114112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 424817 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 1114112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 1105920 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1089536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1089536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1081344 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 425965 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.323520660s of 10.366091728s, submitted: 11
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1064960 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1064960 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 1048576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 1048576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 1040384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429409 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 1040384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 1040384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 1032192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 1032192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 1007616 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 431705 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 1007616 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 1007616 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 999424 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 999424 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.960755348s of 13.993970871s, submitted: 10
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 983040 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:30 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432852 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 974848 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 958464 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432852 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 950272 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 933888 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 925696 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 925696 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433999 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.131777763s of 13.148053169s, submitted: 4
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 909312 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 909312 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 901120 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 435146 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 901120 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 892928 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 892928 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 884736 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 860160 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436293 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 851968 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 851968 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016060829s of 10.034604073s, submitted: 6
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 843776 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 843776 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 835584 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 439734 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 827392 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 802816 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440882 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 802816 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 794624 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 794624 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.789491653s of 10.819688797s, submitted: 6
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442029 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 761856 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444324 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 761856 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 737280 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 737280 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 729088 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 729088 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446620 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.852318764s of 11.895004272s, submitted: 10
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 712704 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 704512 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 704512 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 696320 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 679936 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 448916 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 671744 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 671744 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 655360 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 655360 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 638976 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452360 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 622592 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 614400 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 453508 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 606208 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.911233902s of 15.955444336s, submitted: 12
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 589824 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 589824 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 581632 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 581632 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454655 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 573440 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 573440 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 565248 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 557056 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 557056 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454655 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.033292770s of 11.040759087s, submitted: 2
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 540672 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 540672 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 455802 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 516096 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 516096 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 507904 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 507904 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 499712 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458096 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 491520 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 491520 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 483328 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 475136 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.077341080s of 12.105804443s, submitted: 8
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 458752 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 460390 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 450560 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 450560 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 460390 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 425984 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 425984 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 417792 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 417792 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 401408 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 401408 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 385024 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 385024 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 376832 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 360448 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 360448 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.019342422s of 23.038656235s, submitted: 6
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 344064 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 344064 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464979 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 319488 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 319488 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464979 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 303104 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 303104 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.775456429s of 12.810349464s, submitted: 6
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 467273 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 294912 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 286720 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 270336 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 262144 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 245760 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470716 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 237568 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 237568 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 229376 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 221184 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 221184 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470716 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.953562737s of 14.978596687s, submitted: 8
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471864 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 188416 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 188416 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 180224 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 180224 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473011 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 163840 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 155648 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 155648 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 147456 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 147456 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473011 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.053456306s of 13.070212364s, submitted: 4
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 474158 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 106496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 106496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 90112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 90112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476452 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 65536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.003563881s of 11.035881042s, submitted: 6
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 65536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477599 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 57344 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 40960 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 32768 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 24576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 24576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 8192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 8192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 0 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 0 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1007616 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1007616 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 991232 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 991232 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 983040 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 974848 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 974848 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 958464 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 958464 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 942080 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 942080 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 917504 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 901120 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 901120 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 884736 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 884736 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 843776 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 843776 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 835584 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 835584 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 819200 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 819200 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 802816 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 802816 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 770048 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 770048 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 745472 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 745472 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 737280 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 737280 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 720896 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 720896 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 696320 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 696320 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 671744 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 671744 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 655360 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 655360 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 630784 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 630784 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 606208 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 606208 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 581632 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 581632 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 565248 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 565248 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 548864 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 548864 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 532480 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 532480 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 507904 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 507904 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 499712 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 499712 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 491520 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 466944 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 466944 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 458752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 458752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 442368 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 434176 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 434176 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 425984 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 425984 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 409600 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 409600 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 393216 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 393216 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 368640 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 368640 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 335872 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 335872 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 319488 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 319488 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 303104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 303104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 270336 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 270336 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 245760 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 245760 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 229376 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 229376 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 221184 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 221184 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 196608 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 196608 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 180224 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 180224 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 163840 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 163840 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 155648 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 155648 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 147456 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 122880 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 122880 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 114688 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 114688 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 16.54 MB, 0.03 MB/s
Interval WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 16384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 8192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 8192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 0 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 0 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1007616 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1007616 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 991232 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 991232 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 933888 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 933888 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 909312 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 909312 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 884736 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 884736 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 868352 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 868352 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 843776 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 843776 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 819200 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 819200 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 794624 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 794624 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 761856 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 761856 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 745472 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 745472 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 737280 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 737280 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 729088 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 729088 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 712704 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 712704 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: mgrc ms_handle_reset ms_handle_reset con 0x5613ea45dc00
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: mgrc handle_mgr_configure stats_period=5
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 ms_handle_reset con 0x5613eb896400 session 0x5613eb1c5860
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 ms_handle_reset con 0x5613eb897000 session 0x5613ebc8a5a0
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 18:58:31 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:04:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1075: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:33 np0005535838 rsyslogd[1001]: imjournal: 15556 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 19:04:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1076: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:04:36 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev d716486c-4ceb-42f0-8003-fa594302b4f3 does not exist
Nov 25 19:04:36 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev b61a43f8-0d52-40b3-9506-6815ebde114c does not exist
Nov 25 19:04:36 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 746eeba7-49a1-4897-9ae6-c0c43c563189 does not exist
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:04:36 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 19:04:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1077: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:36 np0005535838 podman[275260]: 2025-11-26 00:04:36.733228238 +0000 UTC m=+0.047355962 container create f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 19:04:36 np0005535838 systemd[1]: Started libpod-conmon-f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2.scope.
Nov 25 19:04:36 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:04:36 np0005535838 podman[275260]: 2025-11-26 00:04:36.712782664 +0000 UTC m=+0.026910418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:04:36 np0005535838 podman[275260]: 2025-11-26 00:04:36.819103296 +0000 UTC m=+0.133231050 container init f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 19:04:36 np0005535838 podman[275260]: 2025-11-26 00:04:36.827233342 +0000 UTC m=+0.141361066 container start f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 19:04:36 np0005535838 podman[275260]: 2025-11-26 00:04:36.830688774 +0000 UTC m=+0.144816498 container attach f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 19:04:36 np0005535838 quizzical_visvesvaraya[275276]: 167 167
Nov 25 19:04:36 np0005535838 systemd[1]: libpod-f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2.scope: Deactivated successfully.
Nov 25 19:04:36 np0005535838 podman[275260]: 2025-11-26 00:04:36.83503697 +0000 UTC m=+0.149164704 container died f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 19:04:36 np0005535838 systemd[1]: var-lib-containers-storage-overlay-39994a60c336645d3802a0dfb20014844ebffbac699a1c483c11d8a572a753cd-merged.mount: Deactivated successfully.
Nov 25 19:04:36 np0005535838 podman[275260]: 2025-11-26 00:04:36.881663252 +0000 UTC m=+0.195791006 container remove f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 19:04:36 np0005535838 systemd[1]: libpod-conmon-f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2.scope: Deactivated successfully.
Nov 25 19:04:37 np0005535838 podman[275298]: 2025-11-26 00:04:37.109652045 +0000 UTC m=+0.045496103 container create 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 19:04:37 np0005535838 systemd[1]: Started libpod-conmon-8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e.scope.
Nov 25 19:04:37 np0005535838 podman[275298]: 2025-11-26 00:04:37.088355068 +0000 UTC m=+0.024199166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:04:37 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:04:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:37 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:37 np0005535838 podman[275298]: 2025-11-26 00:04:37.220456156 +0000 UTC m=+0.156300324 container init 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 19:04:37 np0005535838 podman[275298]: 2025-11-26 00:04:37.23187062 +0000 UTC m=+0.167714718 container start 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 19:04:37 np0005535838 podman[275298]: 2025-11-26 00:04:37.236321919 +0000 UTC m=+0.172165987 container attach 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 19:04:38 np0005535838 quizzical_banach[275314]: --> passed data devices: 0 physical, 3 LVM
Nov 25 19:04:38 np0005535838 quizzical_banach[275314]: --> relative data size: 1.0
Nov 25 19:04:38 np0005535838 quizzical_banach[275314]: --> All data devices are unavailable
Nov 25 19:04:38 np0005535838 systemd[1]: libpod-8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e.scope: Deactivated successfully.
Nov 25 19:04:38 np0005535838 podman[275298]: 2025-11-26 00:04:38.366594694 +0000 UTC m=+1.302438792 container died 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 19:04:38 np0005535838 systemd[1]: libpod-8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e.scope: Consumed 1.085s CPU time.
Nov 25 19:04:38 np0005535838 systemd[1]: var-lib-containers-storage-overlay-b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708-merged.mount: Deactivated successfully.
Nov 25 19:04:38 np0005535838 podman[275298]: 2025-11-26 00:04:38.42463872 +0000 UTC m=+1.360482788 container remove 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 19:04:38 np0005535838 systemd[1]: libpod-conmon-8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e.scope: Deactivated successfully.
Nov 25 19:04:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1078: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:39 np0005535838 podman[275496]: 2025-11-26 00:04:39.172360386 +0000 UTC m=+0.049375116 container create 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 19:04:39 np0005535838 systemd[1]: Started libpod-conmon-80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db.scope.
Nov 25 19:04:39 np0005535838 podman[275496]: 2025-11-26 00:04:39.146375175 +0000 UTC m=+0.023389965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:04:39 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:04:39 np0005535838 podman[275496]: 2025-11-26 00:04:39.260466463 +0000 UTC m=+0.137481183 container init 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 19:04:39 np0005535838 podman[275496]: 2025-11-26 00:04:39.269304259 +0000 UTC m=+0.146318959 container start 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 19:04:39 np0005535838 podman[275496]: 2025-11-26 00:04:39.271850396 +0000 UTC m=+0.148865116 container attach 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 25 19:04:39 np0005535838 focused_lovelace[275512]: 167 167
Nov 25 19:04:39 np0005535838 systemd[1]: libpod-80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db.scope: Deactivated successfully.
Nov 25 19:04:39 np0005535838 conmon[275512]: conmon 80a576eee938f255481f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db.scope/container/memory.events
Nov 25 19:04:39 np0005535838 podman[275496]: 2025-11-26 00:04:39.27646726 +0000 UTC m=+0.153482000 container died 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 19:04:39 np0005535838 systemd[1]: var-lib-containers-storage-overlay-31c1713c5080e2f5a7f23a07600e47ec2120cbf452104e58d41704ff34c8b672-merged.mount: Deactivated successfully.
Nov 25 19:04:39 np0005535838 podman[275496]: 2025-11-26 00:04:39.37974455 +0000 UTC m=+0.256759250 container remove 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 19:04:39 np0005535838 systemd[1]: libpod-conmon-80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db.scope: Deactivated successfully.
Nov 25 19:04:39 np0005535838 podman[275517]: 2025-11-26 00:04:39.433059941 +0000 UTC m=+0.062742213 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 19:04:39 np0005535838 podman[275533]: 2025-11-26 00:04:39.460789369 +0000 UTC m=+0.086625949 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 19:04:39 np0005535838 podman[275581]: 2025-11-26 00:04:39.543223674 +0000 UTC m=+0.041343221 container create ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 19:04:39 np0005535838 systemd[1]: Started libpod-conmon-ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf.scope.
Nov 25 19:04:39 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:04:39 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:39 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:39 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:39 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:39 np0005535838 podman[275581]: 2025-11-26 00:04:39.524595679 +0000 UTC m=+0.022715216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:04:39 np0005535838 podman[275581]: 2025-11-26 00:04:39.633350496 +0000 UTC m=+0.131470073 container init ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 19:04:39 np0005535838 podman[275581]: 2025-11-26 00:04:39.644962135 +0000 UTC m=+0.143081692 container start ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 19:04:39 np0005535838 podman[275581]: 2025-11-26 00:04:39.648868809 +0000 UTC m=+0.146988356 container attach ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]: {
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:    "0": [
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:        {
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "devices": [
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "/dev/loop3"
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            ],
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_name": "ceph_lv0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_size": "21470642176",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "name": "ceph_lv0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "tags": {
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.cephx_lockbox_secret": "",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.cluster_name": "ceph",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.crush_device_class": "",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.encrypted": "0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.osd_id": "0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.type": "block",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.vdo": "0"
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            },
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "type": "block",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "vg_name": "ceph_vg0"
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:        }
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:    ],
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:    "1": [
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:        {
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "devices": [
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "/dev/loop4"
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            ],
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_name": "ceph_lv1",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_size": "21470642176",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "name": "ceph_lv1",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "tags": {
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.cephx_lockbox_secret": "",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.cluster_name": "ceph",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.crush_device_class": "",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.encrypted": "0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.osd_id": "1",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.type": "block",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.vdo": "0"
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            },
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "type": "block",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "vg_name": "ceph_vg1"
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:        }
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:    ],
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:    "2": [
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:        {
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "devices": [
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "/dev/loop5"
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            ],
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_name": "ceph_lv2",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_size": "21470642176",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "name": "ceph_lv2",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "tags": {
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.cephx_lockbox_secret": "",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.cluster_name": "ceph",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.crush_device_class": "",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.encrypted": "0",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.osd_id": "2",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.type": "block",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:                "ceph.vdo": "0"
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            },
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "type": "block",
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:            "vg_name": "ceph_vg2"
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:        }
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]:    ]
Nov 25 19:04:40 np0005535838 lucid_cannon[275598]: }
Nov 25 19:04:40 np0005535838 systemd[1]: libpod-ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf.scope: Deactivated successfully.
Nov 25 19:04:40 np0005535838 podman[275581]: 2025-11-26 00:04:40.375900134 +0000 UTC m=+0.874019691 container died ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 19:04:40 np0005535838 systemd[1]: var-lib-containers-storage-overlay-ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a-merged.mount: Deactivated successfully.
Nov 25 19:04:40 np0005535838 podman[275581]: 2025-11-26 00:04:40.48087398 +0000 UTC m=+0.978993527 container remove ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 19:04:40 np0005535838 systemd[1]: libpod-conmon-ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf.scope: Deactivated successfully.
Nov 25 19:04:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1079: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-26 00:04:40.772 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 19:04:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-26 00:04:40.774 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 19:04:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-26 00:04:40.774 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 19:04:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:04:41 np0005535838 podman[275761]: 2025-11-26 00:04:41.278619199 +0000 UTC m=+0.070156720 container create bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 19:04:41 np0005535838 systemd[1]: Started libpod-conmon-bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3.scope.
Nov 25 19:04:41 np0005535838 podman[275761]: 2025-11-26 00:04:41.25127612 +0000 UTC m=+0.042813741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:04:41 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:04:41 np0005535838 podman[275761]: 2025-11-26 00:04:41.382100375 +0000 UTC m=+0.173637926 container init bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 19:04:41 np0005535838 podman[275761]: 2025-11-26 00:04:41.394215207 +0000 UTC m=+0.185752728 container start bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 19:04:41 np0005535838 podman[275761]: 2025-11-26 00:04:41.397567827 +0000 UTC m=+0.189105348 container attach bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 19:04:41 np0005535838 busy_ellis[275777]: 167 167
Nov 25 19:04:41 np0005535838 systemd[1]: libpod-bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3.scope: Deactivated successfully.
Nov 25 19:04:41 np0005535838 podman[275761]: 2025-11-26 00:04:41.402644431 +0000 UTC m=+0.194181952 container died bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 19:04:41 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d2bb2d82bb17591914721d54a43636ce88c5ca640a926e4a83dd489f3699ba8a-merged.mount: Deactivated successfully.
Nov 25 19:04:41 np0005535838 podman[275761]: 2025-11-26 00:04:41.456591249 +0000 UTC m=+0.248128800 container remove bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 19:04:41 np0005535838 systemd[1]: libpod-conmon-bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3.scope: Deactivated successfully.
Nov 25 19:04:41 np0005535838 podman[275799]: 2025-11-26 00:04:41.676015853 +0000 UTC m=+0.062315781 container create d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 19:04:41 np0005535838 systemd[1]: Started libpod-conmon-d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b.scope.
Nov 25 19:04:41 np0005535838 podman[275799]: 2025-11-26 00:04:41.651538241 +0000 UTC m=+0.037838269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:04:41 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:04:41 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:41 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:41 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:41 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 19:04:41 np0005535838 podman[275799]: 2025-11-26 00:04:41.770625153 +0000 UTC m=+0.156925171 container init d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 19:04:41 np0005535838 podman[275799]: 2025-11-26 00:04:41.777210839 +0000 UTC m=+0.163510797 container start d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 19:04:41 np0005535838 podman[275799]: 2025-11-26 00:04:41.780988939 +0000 UTC m=+0.167288877 container attach d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 19:04:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1080: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:42 np0005535838 practical_jemison[275815]: {
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "osd_id": 2,
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "type": "bluestore"
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:    },
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "osd_id": 1,
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "type": "bluestore"
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:    },
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "osd_id": 0,
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:        "type": "bluestore"
Nov 25 19:04:42 np0005535838 practical_jemison[275815]:    }
Nov 25 19:04:42 np0005535838 practical_jemison[275815]: }
Nov 25 19:04:42 np0005535838 systemd[1]: libpod-d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b.scope: Deactivated successfully.
Nov 25 19:04:42 np0005535838 systemd[1]: libpod-d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b.scope: Consumed 1.066s CPU time.
Nov 25 19:04:42 np0005535838 podman[275848]: 2025-11-26 00:04:42.891385726 +0000 UTC m=+0.031668315 container died d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 19:04:42 np0005535838 systemd[1]: var-lib-containers-storage-overlay-2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a-merged.mount: Deactivated successfully.
Nov 25 19:04:42 np0005535838 podman[275848]: 2025-11-26 00:04:42.9535011 +0000 UTC m=+0.093783659 container remove d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 19:04:42 np0005535838 systemd[1]: libpod-conmon-d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b.scope: Deactivated successfully.
Nov 25 19:04:43 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 19:04:43 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:04:43 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 19:04:43 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:04:43 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev d4fbb68d-c484-4318-bf15-cf6cca10a9ec does not exist
Nov 25 19:04:44 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:04:44 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:04:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1081: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:46 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:04:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1082: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1083: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1084: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:51 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:04:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1085: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1086: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-26_00:04:56
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['images', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms', 'cephfs.cephfs.data']
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 19:04:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1087: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:04:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1088: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1089: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:01 np0005535838 podman[275913]: 2025-11-26 00:05:01.258352964 +0000 UTC m=+0.083340421 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:05:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 19:05:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1090: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1091: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1092: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1093: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:08 np0005535838 nova_compute[252550]: 2025-11-26 00:05:08.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:05:10 np0005535838 podman[275935]: 2025-11-26 00:05:10.264606442 +0000 UTC m=+0.086021753 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 19:05:10 np0005535838 podman[275934]: 2025-11-26 00:05:10.310536615 +0000 UTC m=+0.133701073 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 19:05:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1094: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:11 np0005535838 nova_compute[252550]: 2025-11-26 00:05:11.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:05:11 np0005535838 nova_compute[252550]: 2025-11-26 00:05:11.853 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 19:05:11 np0005535838 nova_compute[252550]: 2025-11-26 00:05:11.854 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 19:05:11 np0005535838 nova_compute[252550]: 2025-11-26 00:05:11.854 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 19:05:11 np0005535838 nova_compute[252550]: 2025-11-26 00:05:11.855 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 19:05:11 np0005535838 nova_compute[252550]: 2025-11-26 00:05:11.855 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 19:05:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 19:05:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/543726678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 19:05:12 np0005535838 nova_compute[252550]: 2025-11-26 00:05:12.338 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 19:05:12 np0005535838 nova_compute[252550]: 2025-11-26 00:05:12.483 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 19:05:12 np0005535838 nova_compute[252550]: 2025-11-26 00:05:12.484 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5181MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 19:05:12 np0005535838 nova_compute[252550]: 2025-11-26 00:05:12.485 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 19:05:12 np0005535838 nova_compute[252550]: 2025-11-26 00:05:12.485 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 19:05:12 np0005535838 nova_compute[252550]: 2025-11-26 00:05:12.558 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 19:05:12 np0005535838 nova_compute[252550]: 2025-11-26 00:05:12.559 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 19:05:12 np0005535838 nova_compute[252550]: 2025-11-26 00:05:12.574 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 19:05:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1095: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:12 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 19:05:12 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4027728265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 19:05:13 np0005535838 nova_compute[252550]: 2025-11-26 00:05:13.008 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 19:05:13 np0005535838 nova_compute[252550]: 2025-11-26 00:05:13.013 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 19:05:13 np0005535838 nova_compute[252550]: 2025-11-26 00:05:13.028 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 19:05:13 np0005535838 nova_compute[252550]: 2025-11-26 00:05:13.030 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 19:05:13 np0005535838 nova_compute[252550]: 2025-11-26 00:05:13.031 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 19:05:14 np0005535838 nova_compute[252550]: 2025-11-26 00:05:14.028 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 19:05:14 np0005535838 nova_compute[252550]: 2025-11-26 00:05:14.028 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 19:05:14 np0005535838 nova_compute[252550]: 2025-11-26 00:05:14.050 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 19:05:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1096: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:14 np0005535838 nova_compute[252550]: 2025-11-26 00:05:14.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 19:05:14 np0005535838 nova_compute[252550]: 2025-11-26 00:05:14.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 19:05:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1097: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:16 np0005535838 nova_compute[252550]: 2025-11-26 00:05:16.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 19:05:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 19:05:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4172957434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 19:05:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 19:05:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4172957434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 19:05:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1098: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:18 np0005535838 nova_compute[252550]: 2025-11-26 00:05:18.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 19:05:18 np0005535838 nova_compute[252550]: 2025-11-26 00:05:18.823 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 19:05:18 np0005535838 nova_compute[252550]: 2025-11-26 00:05:18.824 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 19:05:18 np0005535838 nova_compute[252550]: 2025-11-26 00:05:18.841 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 19:05:19 np0005535838 nova_compute[252550]: 2025-11-26 00:05:19.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 19:05:19 np0005535838 nova_compute[252550]: 2025-11-26 00:05:19.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 19:05:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1099: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1100: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1101: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:05:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:05:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:05:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:05:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:05:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:05:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1102: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1103: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1104: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:32 np0005535838 podman[276022]: 2025-11-26 00:05:32.249230972 +0000 UTC m=+0.080875574 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 19:05:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1105: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1106: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.319639) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535319735, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 2411298, "memory_usage": 2457768, "flush_reason": "Manual Compaction"}
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535337283, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2329144, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20644, "largest_seqno": 22699, "table_properties": {"data_size": 2319784, "index_size": 5918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18595, "raw_average_key_size": 19, "raw_value_size": 2301128, "raw_average_value_size": 2471, "num_data_blocks": 271, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764115304, "oldest_key_time": 1764115304, "file_creation_time": 1764115535, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 17685 microseconds, and 7966 cpu microseconds.
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.337337) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2329144 bytes OK
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.337356) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339031) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339045) EVENT_LOG_v1 {"time_micros": 1764115535339040, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339062) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2402679, prev total WAL file size 2402679, number of live WAL files 2.
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339827) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2274KB)], [50(5644KB)]
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535339906, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 8109457, "oldest_snapshot_seqno": -1}
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4450 keys, 6889791 bytes, temperature: kUnknown
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535383531, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 6889791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6856503, "index_size": 21082, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 106523, "raw_average_key_size": 23, "raw_value_size": 6773065, "raw_average_value_size": 1522, "num_data_blocks": 896, "num_entries": 4450, "num_filter_entries": 4450, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764115535, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.383792) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 6889791 bytes
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.389325) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.5 rd, 157.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 5.5 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 4964, records dropped: 514 output_compression: NoCompression
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.389355) EVENT_LOG_v1 {"time_micros": 1764115535389341, "job": 26, "event": "compaction_finished", "compaction_time_micros": 43707, "compaction_time_cpu_micros": 22403, "output_level": 6, "num_output_files": 1, "total_output_size": 6889791, "num_input_records": 4964, "num_output_records": 4450, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535390279, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535392118, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 19:05:35 np0005535838 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 19:05:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1107: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1108: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:40 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1109: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-26 00:05:40.773 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 19:05:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-26 00:05:40.773 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 19:05:40 np0005535838 ovn_metadata_agent[160720]: 2025-11-26 00:05:40.773 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 19:05:41 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:41 np0005535838 podman[276045]: 2025-11-26 00:05:41.263241867 +0000 UTC m=+0.079016725 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 19:05:41 np0005535838 podman[276044]: 2025-11-26 00:05:41.329007449 +0000 UTC m=+0.147559891 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 19:05:42 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1110: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:44 np0005535838 podman[276261]: 2025-11-26 00:05:44.080446975 +0000 UTC m=+0.113747711 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 19:05:44 np0005535838 podman[276261]: 2025-11-26 00:05:44.182592466 +0000 UTC m=+0.215893202 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 19:05:44 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1111: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 19:05:44 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:44 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 19:05:44 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 218c4abf-746e-4e80-9b35-337f9f6ea5cb does not exist
Nov 25 19:05:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 86d3d06a-dfee-4b00-a912-83987d7fb94c does not exist
Nov 25 19:05:45 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 5e23668f-6f52-4f36-815e-90a597caad86 does not exist
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:45 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 19:05:46 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:46 np0005535838 podman[276675]: 2025-11-26 00:05:46.302313166 +0000 UTC m=+0.038559498 container create 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 19:05:46 np0005535838 systemd[1]: Started libpod-conmon-914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2.scope.
Nov 25 19:05:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:05:46 np0005535838 podman[276675]: 2025-11-26 00:05:46.372358642 +0000 UTC m=+0.108604994 container init 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 19:05:46 np0005535838 podman[276675]: 2025-11-26 00:05:46.379625626 +0000 UTC m=+0.115871968 container start 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 19:05:46 np0005535838 podman[276675]: 2025-11-26 00:05:46.382610055 +0000 UTC m=+0.118856467 container attach 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 19:05:46 np0005535838 podman[276675]: 2025-11-26 00:05:46.287836951 +0000 UTC m=+0.024083313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:05:46 np0005535838 admiring_buck[276691]: 167 167
Nov 25 19:05:46 np0005535838 systemd[1]: libpod-914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2.scope: Deactivated successfully.
Nov 25 19:05:46 np0005535838 podman[276675]: 2025-11-26 00:05:46.385997675 +0000 UTC m=+0.122244097 container died 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 19:05:46 np0005535838 systemd[1]: var-lib-containers-storage-overlay-d63d983e07867fdd91c53b84995387d610f0e007d8fc7643a894651feca70ab2-merged.mount: Deactivated successfully.
Nov 25 19:05:46 np0005535838 podman[276675]: 2025-11-26 00:05:46.434093096 +0000 UTC m=+0.170339448 container remove 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 19:05:46 np0005535838 systemd[1]: libpod-conmon-914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2.scope: Deactivated successfully.
Nov 25 19:05:46 np0005535838 podman[276714]: 2025-11-26 00:05:46.617961414 +0000 UTC m=+0.054239186 container create e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 19:05:46 np0005535838 systemd[1]: Started libpod-conmon-e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b.scope.
Nov 25 19:05:46 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:05:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:46 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:46 np0005535838 podman[276714]: 2025-11-26 00:05:46.599899713 +0000 UTC m=+0.036177505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:05:46 np0005535838 podman[276714]: 2025-11-26 00:05:46.702385832 +0000 UTC m=+0.138663664 container init e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 19:05:46 np0005535838 podman[276714]: 2025-11-26 00:05:46.708915317 +0000 UTC m=+0.145193079 container start e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 19:05:46 np0005535838 podman[276714]: 2025-11-26 00:05:46.712018149 +0000 UTC m=+0.148295981 container attach e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 19:05:46 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1112: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:47 np0005535838 focused_wiles[276730]: --> passed data devices: 0 physical, 3 LVM
Nov 25 19:05:47 np0005535838 focused_wiles[276730]: --> relative data size: 1.0
Nov 25 19:05:47 np0005535838 focused_wiles[276730]: --> All data devices are unavailable
Nov 25 19:05:47 np0005535838 systemd[1]: libpod-e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b.scope: Deactivated successfully.
Nov 25 19:05:47 np0005535838 podman[276714]: 2025-11-26 00:05:47.754526468 +0000 UTC m=+1.190804260 container died e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 19:05:47 np0005535838 systemd[1]: var-lib-containers-storage-overlay-df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7-merged.mount: Deactivated successfully.
Nov 25 19:05:47 np0005535838 podman[276714]: 2025-11-26 00:05:47.843418665 +0000 UTC m=+1.279696437 container remove e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 19:05:47 np0005535838 systemd[1]: libpod-conmon-e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b.scope: Deactivated successfully.
Nov 25 19:05:48 np0005535838 podman[276915]: 2025-11-26 00:05:48.498847592 +0000 UTC m=+0.055906990 container create 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 19:05:48 np0005535838 systemd[1]: Started libpod-conmon-6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5.scope.
Nov 25 19:05:48 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:05:48 np0005535838 podman[276915]: 2025-11-26 00:05:48.479714033 +0000 UTC m=+0.036773461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:05:48 np0005535838 podman[276915]: 2025-11-26 00:05:48.584806953 +0000 UTC m=+0.141866351 container init 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 19:05:48 np0005535838 podman[276915]: 2025-11-26 00:05:48.592401524 +0000 UTC m=+0.149460912 container start 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 19:05:48 np0005535838 podman[276915]: 2025-11-26 00:05:48.595515667 +0000 UTC m=+0.152575155 container attach 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 19:05:48 np0005535838 competent_shockley[276932]: 167 167
Nov 25 19:05:48 np0005535838 systemd[1]: libpod-6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5.scope: Deactivated successfully.
Nov 25 19:05:48 np0005535838 podman[276915]: 2025-11-26 00:05:48.596406031 +0000 UTC m=+0.153465409 container died 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 19:05:48 np0005535838 systemd[1]: var-lib-containers-storage-overlay-52b81ee6979c9a119a888588e917c3bb7cdd46ea6bbd56718820bc26377effa5-merged.mount: Deactivated successfully.
Nov 25 19:05:48 np0005535838 podman[276915]: 2025-11-26 00:05:48.640474825 +0000 UTC m=+0.197534203 container remove 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 19:05:48 np0005535838 systemd[1]: libpod-conmon-6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5.scope: Deactivated successfully.
Nov 25 19:05:48 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1113: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:48 np0005535838 podman[276957]: 2025-11-26 00:05:48.797254231 +0000 UTC m=+0.044330692 container create d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 19:05:48 np0005535838 systemd[1]: Started libpod-conmon-d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2.scope.
Nov 25 19:05:48 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:05:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:48 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:48 np0005535838 podman[276957]: 2025-11-26 00:05:48.868054567 +0000 UTC m=+0.115131048 container init d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 19:05:48 np0005535838 podman[276957]: 2025-11-26 00:05:48.778968144 +0000 UTC m=+0.026044615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:05:48 np0005535838 podman[276957]: 2025-11-26 00:05:48.876209334 +0000 UTC m=+0.123285785 container start d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 19:05:48 np0005535838 podman[276957]: 2025-11-26 00:05:48.880105497 +0000 UTC m=+0.127181968 container attach d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 19:05:49 np0005535838 pensive_moore[276974]: {
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:    "0": [
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:        {
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "devices": [
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "/dev/loop3"
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            ],
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_name": "ceph_lv0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_size": "21470642176",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "name": "ceph_lv0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "tags": {
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.cephx_lockbox_secret": "",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.cluster_name": "ceph",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.crush_device_class": "",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.encrypted": "0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.osd_id": "0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.type": "block",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.vdo": "0"
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            },
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "type": "block",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "vg_name": "ceph_vg0"
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:        }
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:    ],
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:    "1": [
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:        {
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "devices": [
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "/dev/loop4"
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            ],
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_name": "ceph_lv1",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_size": "21470642176",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "name": "ceph_lv1",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "tags": {
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.cephx_lockbox_secret": "",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.cluster_name": "ceph",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.crush_device_class": "",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.encrypted": "0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.osd_id": "1",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.type": "block",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.vdo": "0"
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            },
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "type": "block",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "vg_name": "ceph_vg1"
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:        }
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:    ],
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:    "2": [
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:        {
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "devices": [
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "/dev/loop5"
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            ],
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_name": "ceph_lv2",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_size": "21470642176",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "name": "ceph_lv2",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "tags": {
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.cephx_lockbox_secret": "",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.cluster_name": "ceph",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.crush_device_class": "",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.encrypted": "0",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.osd_id": "2",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.type": "block",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:                "ceph.vdo": "0"
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            },
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "type": "block",
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:            "vg_name": "ceph_vg2"
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:        }
Nov 25 19:05:49 np0005535838 pensive_moore[276974]:    ]
Nov 25 19:05:49 np0005535838 pensive_moore[276974]: }
Nov 25 19:05:49 np0005535838 systemd[1]: libpod-d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2.scope: Deactivated successfully.
Nov 25 19:05:49 np0005535838 podman[276957]: 2025-11-26 00:05:49.621228858 +0000 UTC m=+0.868305319 container died d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 19:05:49 np0005535838 systemd[1]: var-lib-containers-storage-overlay-12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e-merged.mount: Deactivated successfully.
Nov 25 19:05:49 np0005535838 podman[276957]: 2025-11-26 00:05:49.67874513 +0000 UTC m=+0.925821581 container remove d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 19:05:49 np0005535838 systemd[1]: libpod-conmon-d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2.scope: Deactivated successfully.
Nov 25 19:05:50 np0005535838 podman[277139]: 2025-11-26 00:05:50.409561426 +0000 UTC m=+0.052323855 container create bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 19:05:50 np0005535838 systemd[1]: Started libpod-conmon-bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c.scope.
Nov 25 19:05:50 np0005535838 podman[277139]: 2025-11-26 00:05:50.381778026 +0000 UTC m=+0.024540495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:05:50 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:05:50 np0005535838 podman[277139]: 2025-11-26 00:05:50.504975567 +0000 UTC m=+0.147738006 container init bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 19:05:50 np0005535838 podman[277139]: 2025-11-26 00:05:50.514139992 +0000 UTC m=+0.156902411 container start bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 19:05:50 np0005535838 podman[277139]: 2025-11-26 00:05:50.517548712 +0000 UTC m=+0.160311461 container attach bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 19:05:50 np0005535838 quizzical_wing[277155]: 167 167
Nov 25 19:05:50 np0005535838 systemd[1]: libpod-bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c.scope: Deactivated successfully.
Nov 25 19:05:50 np0005535838 conmon[277155]: conmon bd492a84160eb6fce406 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c.scope/container/memory.events
Nov 25 19:05:50 np0005535838 podman[277139]: 2025-11-26 00:05:50.520413798 +0000 UTC m=+0.163176217 container died bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 19:05:50 np0005535838 systemd[1]: var-lib-containers-storage-overlay-189c8081a7db1072b61c652bcc670f8f58c4820fad4fc6e121442a8581a63228-merged.mount: Deactivated successfully.
Nov 25 19:05:50 np0005535838 podman[277139]: 2025-11-26 00:05:50.552110163 +0000 UTC m=+0.194872582 container remove bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 19:05:50 np0005535838 systemd[1]: libpod-conmon-bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c.scope: Deactivated successfully.
Nov 25 19:05:50 np0005535838 podman[277180]: 2025-11-26 00:05:50.720023366 +0000 UTC m=+0.040691296 container create 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 19:05:50 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1114: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:50 np0005535838 systemd[1]: Started libpod-conmon-667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6.scope.
Nov 25 19:05:50 np0005535838 systemd[1]: Started libcrun container.
Nov 25 19:05:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:50 np0005535838 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 19:05:50 np0005535838 podman[277180]: 2025-11-26 00:05:50.700343782 +0000 UTC m=+0.021011732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 19:05:50 np0005535838 podman[277180]: 2025-11-26 00:05:50.796216855 +0000 UTC m=+0.116884805 container init 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 19:05:50 np0005535838 podman[277180]: 2025-11-26 00:05:50.801564167 +0000 UTC m=+0.122232097 container start 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 19:05:50 np0005535838 podman[277180]: 2025-11-26 00:05:50.805247765 +0000 UTC m=+0.125915725 container attach 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 19:05:51 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]: {
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:    "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "osd_id": 2,
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "type": "bluestore"
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:    },
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:    "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "osd_id": 1,
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "type": "bluestore"
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:    },
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:    "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "osd_id": 0,
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:        "type": "bluestore"
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]:    }
Nov 25 19:05:51 np0005535838 vigorous_dewdney[277196]: }
Nov 25 19:05:51 np0005535838 systemd[1]: libpod-667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6.scope: Deactivated successfully.
Nov 25 19:05:51 np0005535838 systemd[1]: libpod-667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6.scope: Consumed 1.050s CPU time.
Nov 25 19:05:51 np0005535838 podman[277180]: 2025-11-26 00:05:51.844987709 +0000 UTC m=+1.165655669 container died 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 19:05:51 np0005535838 systemd[1]: var-lib-containers-storage-overlay-77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8-merged.mount: Deactivated successfully.
Nov 25 19:05:51 np0005535838 podman[277180]: 2025-11-26 00:05:51.959450888 +0000 UTC m=+1.280118828 container remove 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 19:05:51 np0005535838 systemd[1]: libpod-conmon-667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6.scope: Deactivated successfully.
Nov 25 19:05:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 19:05:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:52 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 19:05:52 np0005535838 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:52 np0005535838 ceph-mgr[75954]: [progress WARNING root] complete: ev 8b2a3764-83b8-454e-a5de-8cd804140f22 does not exist
Nov 25 19:05:52 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:52 np0005535838 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 19:05:52 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1115: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:54 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1116: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:56 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-26_00:05:56
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] pools ['vms', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data']
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 19:05:56 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1117: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:05:58 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1118: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:00 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1119: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:01 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 19:06:01 np0005535838 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 19:06:02 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1120: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:03 np0005535838 podman[277292]: 2025-11-26 00:06:03.272353148 +0000 UTC m=+0.095531066 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 19:06:03 np0005535838 nova_compute[252550]: 2025-11-26 00:06:03.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:03 np0005535838 nova_compute[252550]: 2025-11-26 00:06:03.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 19:06:03 np0005535838 nova_compute[252550]: 2025-11-26 00:06:03.841 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 19:06:04 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1121: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:06 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:06:06 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1122: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:08 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1123: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:08 np0005535838 nova_compute[252550]: 2025-11-26 00:06:08.841 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:10 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1124: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:11 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:06:12 np0005535838 podman[277314]: 2025-11-26 00:06:12.25656516 +0000 UTC m=+0.069953404 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 19:06:12 np0005535838 podman[277313]: 2025-11-26 00:06:12.274131968 +0000 UTC m=+0.100722004 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Nov 25 19:06:12 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1125: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:13 np0005535838 nova_compute[252550]: 2025-11-26 00:06:13.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:13 np0005535838 nova_compute[252550]: 2025-11-26 00:06:13.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:13 np0005535838 nova_compute[252550]: 2025-11-26 00:06:13.854 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 19:06:13 np0005535838 nova_compute[252550]: 2025-11-26 00:06:13.855 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 19:06:13 np0005535838 nova_compute[252550]: 2025-11-26 00:06:13.855 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 19:06:13 np0005535838 nova_compute[252550]: 2025-11-26 00:06:13.856 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 19:06:13 np0005535838 nova_compute[252550]: 2025-11-26 00:06:13.856 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 19:06:14 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 19:06:14 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783339804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.274 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.521 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.523 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5148MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.523 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.524 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.760 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.761 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 19:06:14 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1126: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.892 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing inventories for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.998 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating ProviderTree inventory for provider 08547965-b35f-4b7b-95d8-902f06aa011c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 19:06:14 np0005535838 nova_compute[252550]: 2025-11-26 00:06:14.998 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating inventory in ProviderTree for provider 08547965-b35f-4b7b-95d8-902f06aa011c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 19:06:15 np0005535838 nova_compute[252550]: 2025-11-26 00:06:15.010 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing aggregate associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 19:06:15 np0005535838 nova_compute[252550]: 2025-11-26 00:06:15.028 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing trait associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 19:06:15 np0005535838 nova_compute[252550]: 2025-11-26 00:06:15.055 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 19:06:15 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 19:06:15 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1528675005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 19:06:15 np0005535838 nova_compute[252550]: 2025-11-26 00:06:15.480 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 19:06:15 np0005535838 nova_compute[252550]: 2025-11-26 00:06:15.487 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 19:06:15 np0005535838 nova_compute[252550]: 2025-11-26 00:06:15.513 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 19:06:15 np0005535838 nova_compute[252550]: 2025-11-26 00:06:15.516 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 19:06:15 np0005535838 nova_compute[252550]: 2025-11-26 00:06:15.516 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 19:06:16 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:06:16 np0005535838 nova_compute[252550]: 2025-11-26 00:06:16.518 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:16 np0005535838 nova_compute[252550]: 2025-11-26 00:06:16.519 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:16 np0005535838 nova_compute[252550]: 2025-11-26 00:06:16.519 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 19:06:16 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1127: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 19:06:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/972289283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 19:06:17 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 19:06:17 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/972289283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 19:06:17 np0005535838 nova_compute[252550]: 2025-11-26 00:06:17.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:18 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1128: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:19 np0005535838 nova_compute[252550]: 2025-11-26 00:06:19.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:19 np0005535838 nova_compute[252550]: 2025-11-26 00:06:19.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 19:06:19 np0005535838 nova_compute[252550]: 2025-11-26 00:06:19.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 19:06:19 np0005535838 nova_compute[252550]: 2025-11-26 00:06:19.841 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 19:06:19 np0005535838 nova_compute[252550]: 2025-11-26 00:06:19.841 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:20 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1129: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:20 np0005535838 nova_compute[252550]: 2025-11-26 00:06:20.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:20 np0005535838 nova_compute[252550]: 2025-11-26 00:06:20.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:20 np0005535838 nova_compute[252550]: 2025-11-26 00:06:20.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 19:06:21 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:06:21 np0005535838 nova_compute[252550]: 2025-11-26 00:06:21.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 19:06:22 np0005535838 systemd-logind[789]: New session 55 of user zuul.
Nov 25 19:06:22 np0005535838 systemd[1]: Started Session 55 of User zuul.
Nov 25 19:06:22 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1130: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:24 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15015 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:24 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1131: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:25 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15017 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:25 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 19:06:25 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039388735' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 19:06:26 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:06:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:06:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:06:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:06:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:06:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 19:06:26 np0005535838 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 19:06:26 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1132: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:28 np0005535838 ovs-vsctl[277670]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 19:06:28 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1133: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:29 np0005535838 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 19:06:29 np0005535838 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 19:06:29 np0005535838 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 19:06:29 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: cache status {prefix=cache status} (starting...)
Nov 25 19:06:29 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: client ls {prefix=client ls} (starting...)
Nov 25 19:06:29 np0005535838 lvm[278012]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 19:06:29 np0005535838 lvm[278012]: VG ceph_vg0 finished
Nov 25 19:06:29 np0005535838 lvm[278040]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 19:06:29 np0005535838 lvm[278040]: VG ceph_vg1 finished
Nov 25 19:06:29 np0005535838 lvm[278046]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 19:06:29 np0005535838 lvm[278046]: VG ceph_vg2 finished
Nov 25 19:06:30 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15021 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:30 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15023 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:30 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 19:06:30 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 19:06:30 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 19:06:30 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 19:06:30 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1134: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:30 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 19:06:30 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/39046050' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 19:06:30 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 19:06:31 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 19:06:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:06:31 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15029 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:31 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-26T00:06:31.097+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 19:06:31 np0005535838 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 19:06:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 19:06:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654800089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 19:06:31 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 19:06:31 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 19:06:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 19:06:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774492520' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 19:06:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 19:06:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521965635' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 19:06:31 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: ops {prefix=ops} (starting...)
Nov 25 19:06:31 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 19:06:31 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2650124604' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 19:06:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 19:06:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196617477' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 19:06:32 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: session ls {prefix=session ls} (starting...)
Nov 25 19:06:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 19:06:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776063975' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 19:06:32 np0005535838 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: status {prefix=status} (starting...)
Nov 25 19:06:32 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15043 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:32 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 19:06:32 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077956155' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 19:06:32 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1135: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:32 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15047 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:33 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 19:06:33 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135112060' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 19:06:33 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 19:06:33 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/186323434' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 19:06:33 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 19:06:33 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529213605' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 19:06:33 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 19:06:33 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/665988486' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 19:06:33 np0005535838 podman[278587]: 2025-11-26 00:06:33.873013269 +0000 UTC m=+0.063156293 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 19:06:34 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15059 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:34 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-26T00:06:34.204+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 19:06:34 np0005535838 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 19:06:34 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 19:06:34 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723364685' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 19:06:34 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15063 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:34 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 19:06:34 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1553018103' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 19:06:34 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1136: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:34 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15065 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 19:06:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014545830' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 19:06:35 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15069 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 19:06:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754443487' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 19:06:35 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15073 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59367424 unmapped: 344064 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59367424 unmapped: 344064 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 385089 data_alloc: 218103808 data_used: 32768
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59383808 unmapped: 327680 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59392000 unmapped: 319488 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59408384 unmapped: 303104 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59408384 unmapped: 303104 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386236 data_alloc: 218103808 data_used: 32768
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.723413467s of 12.791498184s, submitted: 16
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59441152 unmapped: 270336 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59441152 unmapped: 270336 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59482112 unmapped: 229376 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 389943 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59482112 unmapped: 229376 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 389943 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59506688 unmapped: 204800 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59514880 unmapped: 196608 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.111580849s of 13.117458344s, submitted: 2
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59514880 unmapped: 196608 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 391091 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59523072 unmapped: 188416 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59523072 unmapped: 188416 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59531264 unmapped: 180224 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59531264 unmapped: 180224 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59539456 unmapped: 172032 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 393387 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59547648 unmapped: 163840 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59555840 unmapped: 155648 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59555840 unmapped: 155648 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59564032 unmapped: 147456 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59564032 unmapped: 147456 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395683 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59580416 unmapped: 131072 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59588608 unmapped: 122880 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395683 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59596800 unmapped: 114688 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59596800 unmapped: 114688 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.988166809s of 18.023300171s, submitted: 10
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59604992 unmapped: 106496 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59613184 unmapped: 98304 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59621376 unmapped: 90112 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 397979 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59629568 unmapped: 81920 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59629568 unmapped: 81920 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 400274 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.980714798s of 12.043251991s, submitted: 10
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59662336 unmapped: 49152 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402569 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59670528 unmapped: 40960 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59678720 unmapped: 32768 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 406013 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59695104 unmapped: 16384 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59695104 unmapped: 16384 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 408309 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59719680 unmapped: 1040384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59719680 unmapped: 1040384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 408309 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59736064 unmapped: 1024000 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.015459061s of 20.058856964s, submitted: 12
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59752448 unmapped: 1007616 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409457 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59752448 unmapped: 1007616 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59760640 unmapped: 999424 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59760640 unmapped: 999424 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59768832 unmapped: 991232 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59768832 unmapped: 991232 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 411752 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59777024 unmapped: 983040 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59777024 unmapped: 983040 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59785216 unmapped: 974848 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59801600 unmapped: 958464 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59817984 unmapped: 942080 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414047 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 933888 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 933888 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414047 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.928712845s of 17.964544296s, submitted: 10
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416341 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418635 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59916288 unmapped: 843776 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59916288 unmapped: 843776 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418635 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.752611160s of 12.872432709s, submitted: 8
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 819200 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 819200 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59949056 unmapped: 811008 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 420929 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59949056 unmapped: 811008 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422077 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.934057236s of 11.955801964s, submitted: 6
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59981824 unmapped: 778240 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59990016 unmapped: 770048 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423224 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 426667 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60047360 unmapped: 712704 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427814 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60055552 unmapped: 704512 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.913698196s of 12.949940681s, submitted: 10
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60071936 unmapped: 688128 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60080128 unmapped: 679936 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60088320 unmapped: 671744 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60088320 unmapped: 671744 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60121088 unmapped: 638976 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60121088 unmapped: 638976 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 630784 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 630784 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 622592 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 622592 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 589824 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 589824 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 565248 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 565248 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60219392 unmapped: 540672 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60227584 unmapped: 532480 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60268544 unmapped: 491520 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60293120 unmapped: 466944 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60293120 unmapped: 466944 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60301312 unmapped: 458752 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60301312 unmapped: 458752 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60325888 unmapped: 434176 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60334080 unmapped: 425984 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60334080 unmapped: 425984 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60342272 unmapped: 417792 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60350464 unmapped: 409600 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60350464 unmapped: 409600 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60358656 unmapped: 401408 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60358656 unmapped: 401408 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 393216 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 393216 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60399616 unmapped: 360448 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60416000 unmapped: 344064 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 335872 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 335872 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60432384 unmapped: 327680 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 319488 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 319488 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60473344 unmapped: 286720 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60473344 unmapped: 286720 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 278528 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 278528 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60497920 unmapped: 262144 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60497920 unmapped: 262144 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60506112 unmapped: 253952 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60506112 unmapped: 253952 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60571648 unmapped: 188416 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60571648 unmapped: 188416 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60588032 unmapped: 172032 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60596224 unmapped: 163840 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60596224 unmapped: 163840 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 155648 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 155648 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60612608 unmapped: 147456 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60620800 unmapped: 139264 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60620800 unmapped: 139264 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 122880 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 122880 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 114688 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 114688 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 106496 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60669952 unmapped: 90112 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60669952 unmapped: 90112 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 65536 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 65536 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 49152 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 49152 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 32768 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 32768 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 24576 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 24576 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 0 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 0 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60784640 unmapped: 1024000 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60784640 unmapped: 1024000 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60801024 unmapped: 1007616 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60801024 unmapped: 1007616 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60809216 unmapped: 999424 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60809216 unmapped: 999424 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60833792 unmapped: 974848 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60833792 unmapped: 974848 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60850176 unmapped: 958464 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60850176 unmapped: 958464 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60866560 unmapped: 942080 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60866560 unmapped: 942080 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60882944 unmapped: 925696 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60882944 unmapped: 925696 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60899328 unmapped: 909312 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60899328 unmapped: 909312 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60915712 unmapped: 892928 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60915712 unmapped: 892928 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60932096 unmapped: 876544 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60932096 unmapped: 876544 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60940288 unmapped: 868352 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 860160 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 860160 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 835584 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 835584 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60997632 unmapped: 811008 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60997632 unmapped: 811008 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61005824 unmapped: 802816 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61005824 unmapped: 802816 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61030400 unmapped: 778240 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61030400 unmapped: 778240 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61046784 unmapped: 761856 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61046784 unmapped: 761856 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 729088 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 729088 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 712704 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 712704 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 696320 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 696320 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 557056 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 557056 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 540672 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 540672 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 524288 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 524288 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 507904 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 507904 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61317120 unmapped: 491520 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61317120 unmapped: 491520 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61333504 unmapped: 475136 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61333504 unmapped: 475136 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61358080 unmapped: 450560 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61366272 unmapped: 442368 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61366272 unmapped: 442368 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61374464 unmapped: 434176 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61374464 unmapped: 434176 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61382656 unmapped: 425984 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 417792 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 417792 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61399040 unmapped: 409600 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61399040 unmapped: 409600 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61423616 unmapped: 385024 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 16.16 MB, 0.03 MB/s
Interval WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61480960 unmapped: 327680 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 319488 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 319488 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 303104 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 303104 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 278528 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 278528 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61546496 unmapped: 262144 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61546496 unmapped: 262144 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 253952 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 253952 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61562880 unmapped: 245760 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61571072 unmapped: 237568 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61571072 unmapped: 237568 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 221184 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 221184 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 212992 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 212992 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61612032 unmapped: 196608 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61612032 unmapped: 196608 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 180224 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 180224 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 163840 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 155648 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 131072 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 131072 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 98304 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 98304 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 73728 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 73728 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 57344 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 57344 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 49152 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 49152 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 24576 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 24576 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 0 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 0 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 1032192 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 1032192 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61833216 unmapped: 1024000 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61833216 unmapped: 1024000 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 1007616 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 1007616 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: monclient: no keepalive since 2025-11-25T23:46:45.136110+0000 (2106-02-07T06:28:15.999867+0000 seconds), reconnecting
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: monclient: found mon.compute-0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: set_mon_vals no callback set
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc handle_mgr_map Got map version 9
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc ms_handle_reset ms_handle_reset con 0x56223dd09c00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc handle_mgr_configure stats_period=5
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, 
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 63 handle_osd_map epochs [64,65], i have 63, src has [1,65]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1071.016235352s of 1071.030395508s, submitted: 4
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 65 heartbeat osd_stat(store_statfs(0x4fe117000/0x0/0x4ffc00000, data 0x4afe6/0xb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 17514496 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 66 ms_handle_reset con 0x56223ea28c00 session 0x56223f2c0b40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 17457152 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 502193 data_alloc: 218103808 data_used: 114688
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 17448960 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 17448960 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 67 ms_handle_reset con 0x56223e823c00 session 0x56223eae34a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd4a3000/0x0/0x4ffc00000, data 0xcbc602/0xd2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539786 data_alloc: 218103808 data_used: 122880
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd49e000/0x0/0x4ffc00000, data 0xcbdbfb/0xd2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.124011040s of 11.322376251s, submitted: 43
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.678512573s of 28.689655304s, submitted: 13
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 69 ms_handle_reset con 0x56223e822800 session 0x56223eae2d20
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 17227776 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 17227776 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fd496000/0x0/0x4ffc00000, data 0xcc0a78/0xd37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 17219584 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604393 data_alloc: 218103808 data_used: 131072
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 25485312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fbc96000/0x0/0x4ffc00000, data 0x24c0a88/0x2538000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 25436160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 69 ms_handle_reset con 0x56223e822c00 session 0x56223f2205a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 25436160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 25419776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 71 ms_handle_reset con 0x56223e823000 session 0x56223f221a40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 24428544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 564280 data_alloc: 218103808 data_used: 139264
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 24387584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc3640/0xd3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc3640/0xd3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 72 ms_handle_reset con 0x56223e822800 session 0x56223f707e00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 24199168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 24199168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.976924896s of 11.311155319s, submitted: 70
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 73 ms_handle_reset con 0x56223e822c00 session 0x56223f5f1e00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 24100864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 73 ms_handle_reset con 0x56223e823c00 session 0x56223f5f14a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 23977984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 574353 data_alloc: 218103808 data_used: 139264
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 23920640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223ea28c00 session 0x56223f5f0960
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e823400 session 0x56223f742f00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 23904256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e823400 session 0x56223f5bf4a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e822800 session 0x56223f5be960
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd481000/0x0/0x4ffc00000, data 0xcc8cfc/0xd4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 23904256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 76 ms_handle_reset con 0x56223e822c00 session 0x56223f707a40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 76 ms_handle_reset con 0x56223e823c00 session 0x56223f2c0780
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 23764992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fd480000/0x0/0x4ffc00000, data 0xcca2d4/0xd4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [0,0,1,1])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 77 ms_handle_reset con 0x56223f254c00 session 0x56223f59f860
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 23625728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 77 ms_handle_reset con 0x56223ea28c00 session 0x56223f2e7860
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 588235 data_alloc: 218103808 data_used: 147456
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 23592960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 23592960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 23584768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005178452s of 10.492918968s, submitted: 152
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822c00 session 0x56223f2205a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822800 session 0x56223ea89860
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822400 session 0x56223ea89680
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd479000/0x0/0x4ffc00000, data 0xcce187/0xd54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 594951 data_alloc: 218103808 data_used: 147456
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 23461888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 23461888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 23453696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fd476000/0x0/0x4ffc00000, data 0xccf683/0xd57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 80 ms_handle_reset con 0x56223e823c00 session 0x56223ea88f00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 23453696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 597747 data_alloc: 218103808 data_used: 147456
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 82 ms_handle_reset con 0x56223e822400 session 0x56223f5f0960
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 23298048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 83 ms_handle_reset con 0x56223e823c00 session 0x56223eae3860
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 23281664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 84 ms_handle_reset con 0x56223e822c00 session 0x56223eae2d20
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 22175744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.822913170s of 10.012957573s, submitted: 69
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 85 ms_handle_reset con 0x56223e822800 session 0x56223eae2b40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 22110208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 86 ms_handle_reset con 0x56223ea28c00 session 0x56223f2c1a40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 86 heartbeat osd_stat(store_statfs(0x4fd45f000/0x0/0x4ffc00000, data 0xcd6c4b/0xd6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 634272 data_alloc: 218103808 data_used: 200704
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fd459000/0x0/0x4ffc00000, data 0xcd996d/0xd74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 22036480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 87 ms_handle_reset con 0x56223e822800 session 0x56223f2210e0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 88 ms_handle_reset con 0x56223e822400 session 0x56223f2c10e0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 20619264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 88 ms_handle_reset con 0x56223e822c00 session 0x56223f220000
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 641350 data_alloc: 218103808 data_used: 200704
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 20455424 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 89 ms_handle_reset con 0x56223e86dc00 session 0x56223f220f00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xd01d2a/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 20348928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 90 ms_handle_reset con 0x562240975000 session 0x56223f707c20
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 20234240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 91 ms_handle_reset con 0x562240975000 session 0x56223f5bef00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.528375626s of 10.122215271s, submitted: 156
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822c00 session 0x56223ea01680
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822400 session 0x56223f11bc20
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 19128320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822800 session 0x56223e9d41e0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0xd04238/0xda0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 18169856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e86dc00 session 0x56223ea010e0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 652558 data_alloc: 218103808 data_used: 217088
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e822400 session 0x56223ea005a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e822800 session 0x56223f2e8960
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 18112512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 18079744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fbe76000/0x0/0x4ffc00000, data 0xd06d56/0xda6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 18022400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 95 ms_handle_reset con 0x56223e822c00 session 0x56223f2e92c0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975000 session 0x56223f2e8b40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 17973248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664123 data_alloc: 218103808 data_used: 229376
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fbe70000/0x0/0x4ffc00000, data 0xd099ba/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975400 session 0x56223e9d4000
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822400 session 0x56223ea00000
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975400 session 0x56223f707e00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822800 session 0x56223f2e74a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975000 session 0x56223dc9da40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975c00 session 0x56223f220780
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975c00 session 0x56223f743e00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822400 session 0x56223ea005a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 17948672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fbe70000/0x0/0x4ffc00000, data 0xd099ba/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.487519264s of 10.742533684s, submitted: 87
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x56223e822800 session 0x56223ea010e0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975000 session 0x56223f2e8b40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975400 session 0x56223f2e92c0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668581 data_alloc: 218103808 data_used: 237568
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975000 session 0x56223f2e8000
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0xd0aea2/0xdb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 17989632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668581 data_alloc: 218103808 data_used: 237568
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x562240975c00 session 0x56223f2c03c0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x562240654c00 session 0x56223f2f8000
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x56224066d000 session 0x56223f706d20
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x56223e822c00 session 0x56223f743a40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 17915904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fbe6a000/0x0/0x4ffc00000, data 0xd0c45c/0xdb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 98 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 99 ms_handle_reset con 0x562240654c00 session 0x56223f5be960
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0xd0de6a/0xdb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 100 ms_handle_reset con 0x56224066cc00 session 0x56223f5f0f00
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679111 data_alloc: 218103808 data_used: 241664
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.938570976s of 11.146072388s, submitted: 55
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56224066c800 session 0x56223f220780
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56223e822400 session 0x56223f59e5a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56223e822800 session 0x56223ea01c20
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fbe61000/0x0/0x4ffc00000, data 0xd1066a/0xdbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 17825792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 102 ms_handle_reset con 0x56223e822400 session 0x56223f2205a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683786 data_alloc: 218103808 data_used: 245760
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fbe5f000/0x0/0x4ffc00000, data 0xd11c84/0xdbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x56223e823c00 session 0x56223f5f14a0
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x56223f255000 session 0x56223f2f9680
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 17768448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x562240654c00 session 0x56223ea01a40
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fbe5c000/0x0/0x4ffc00000, data 0xd13140/0xdc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 17768448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683002 data_alloc: 218103808 data_used: 241664
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 104 ms_handle_reset con 0x56224066c800 session 0x56223eae2780
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.860716820s of 10.143070221s, submitted: 96
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 17752064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 105 ms_handle_reset con 0x56224066d000 session 0x56223f2e9c20
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689942 data_alloc: 218103808 data_used: 245760
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689942 data_alloc: 218103808 data_used: 245760
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 17727488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.980017662s of 12.055953979s, submitted: 14
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 692244 data_alloc: 218103808 data_used: 245760
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 17489920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'config show' '{prefix=config show}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 17170432 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 16916480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 17014784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'log dump' '{prefix=log dump}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 16859136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 16859136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 5481 writes, 23K keys, 5481 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5481 writes, 906 syncs, 6.05 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1309 writes, 3946 keys, 1309 commit groups, 1.0 writes per commit group, ingest: 2.20 MB, 0.00 MB/s
Interval WAL: 1309 writes, 541 syncs, 2.42 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc ms_handle_reset ms_handle_reset con 0x56223f255800
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: mgrc handle_mgr_configure stats_period=5
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 19:06:35 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2748698357' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'config show' '{prefix=config show}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 16211968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 16269312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:35 np0005535838 ceph-osd[91111]: do_command 'log dump' '{prefix=log dump}'
Nov 25 19:06:36 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15077 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 19:06:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 19:06:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125873004' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 19:06:36 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15081 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:36 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1137: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:36 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 19:06:36 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257827287' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 19:06:36 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15085 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 19:06:37 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15089 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 19:06:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 19:06:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399618046' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 19:06:37 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15091 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 19:06:37 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 19:06:37 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623171305' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 19:06:38 np0005535838 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15099 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 19:06:38 np0005535838 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-26T00:06:38.292+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 19:06:38 np0005535838 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 19:06:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 19:06:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216966318' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 19:06:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 19:06:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332673555' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 19:06:38 np0005535838 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1138: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 19:06:38 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 19:06:38 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702273331' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 19:06:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 19:06:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439148967' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 19:06:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 19:06:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/311575299' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 19:06:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 19:06:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179959095' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 19:06:39 np0005535838 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 19:06:39 np0005535838 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591011433' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 999424 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 999424 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.960755348s of 13.993970871s, submitted: 10
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 983040 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432852 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 974848 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 958464 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432852 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 950272 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 933888 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 925696 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 925696 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433999 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.131777763s of 13.148053169s, submitted: 4
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 909312 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 909312 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 901120 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 435146 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 901120 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 892928 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 892928 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 884736 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 860160 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436293 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 851968 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 851968 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016060829s of 10.034604073s, submitted: 6
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 843776 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 843776 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 835584 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 439734 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 827392 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 19:06:39 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 802816 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440882 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 802816 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 794624 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 794624 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.789491653s of 10.819688797s, submitted: 6
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442029 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 761856 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444324 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 761856 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 737280 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 737280 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 729088 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 729088 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446620 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.852318764s of 11.895004272s, submitted: 10
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 712704 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 704512 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 704512 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 696320 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 679936 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 448916 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 671744 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 671744 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 655360 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 655360 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 638976 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452360 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 622592 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 614400 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 453508 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 606208 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.911233902s of 15.955444336s, submitted: 12
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 589824 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 589824 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 581632 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 581632 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454655 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 573440 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 573440 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 565248 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 557056 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 557056 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454655 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.033292770s of 11.040759087s, submitted: 2
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 540672 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 540672 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 455802 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 516096 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 516096 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 507904 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 507904 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 499712 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458096 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 491520 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 491520 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 483328 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 475136 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.077341080s of 12.105804443s, submitted: 8
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 458752 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 460390 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 450560 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 450560 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 460390 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 425984 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 425984 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 417792 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 417792 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 401408 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 401408 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 385024 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 385024 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 376832 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 360448 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 360448 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.019342422s of 23.038656235s, submitted: 6
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 344064 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 344064 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464979 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 319488 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 319488 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464979 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 303104 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 303104 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.775456429s of 12.810349464s, submitted: 6
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 467273 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 294912 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 286720 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 270336 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 262144 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 245760 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470716 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 237568 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 237568 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 229376 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 221184 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 221184 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470716 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.953562737s of 14.978596687s, submitted: 8
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471864 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 188416 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 188416 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 180224 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 180224 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473011 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 163840 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 155648 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 155648 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 147456 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 147456 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473011 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.053456306s of 13.070212364s, submitted: 4
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 474158 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 106496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 106496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 90112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 90112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476452 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 65536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.003563881s of 11.035881042s, submitted: 6
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 65536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477599 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 57344 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 40960 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 32768 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 24576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 24576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 8192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 8192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 0 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 0 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1007616 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1007616 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 991232 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 991232 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 983040 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 974848 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 974848 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 958464 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 958464 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 942080 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 942080 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 917504 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 901120 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 901120 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 884736 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 884736 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 843776 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 843776 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 835584 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 835584 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 819200 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 819200 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 802816 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 802816 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 770048 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 770048 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 745472 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 745472 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 737280 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 737280 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 720896 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 720896 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 696320 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 696320 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 671744 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 671744 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 655360 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 655360 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 630784 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 630784 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 606208 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 606208 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 581632 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 581632 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 565248 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 565248 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 548864 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 548864 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 532480 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 532480 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 507904 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 507904 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 499712 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 499712 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 491520 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 466944 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 466944 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 458752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 458752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 442368 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 434176 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 434176 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 425984 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 425984 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 409600 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 409600 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 393216 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 393216 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 368640 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 368640 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 335872 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 335872 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 319488 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 319488 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 303104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 303104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 270336 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 270336 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 245760 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 245760 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 229376 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 229376 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 221184 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 221184 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 196608 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 196608 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 180224 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 180224 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 163840 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 163840 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 155648 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 155648 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 147456 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 122880 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 122880 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 114688 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 114688 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 16.54 MB, 0.03 MB/s
Interval WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 16384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 8192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 8192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 0 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 0 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1007616 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1007616 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 991232 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 991232 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 933888 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 933888 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 909312 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 909312 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 884736 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 884736 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 868352 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 868352 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 843776 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 843776 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 819200 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 819200 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 794624 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 794624 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 761856 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 761856 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 745472 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 745472 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 737280 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 737280 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 729088 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 729088 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 712704 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 712704 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: mgrc ms_handle_reset ms_handle_reset con 0x5613ea45dc00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: mgrc handle_mgr_configure stats_period=5
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 ms_handle_reset con 0x5613eb896400 session 0x5613eb1c5860
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 ms_handle_reset con 0x5613eb897000 session 0x5613ebc8a5a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown,
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1016.414916992s of 1016.440307617s, submitted: 6
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 573440 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 499712 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb5dc/0x128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 17219584 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 66 ms_handle_reset con 0x5613ece8c800 session 0x5613eb982b40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 17137664 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 547922 data_alloc: 218103808 data_used: 24576
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 17137664 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 17129472 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 16949248 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 67 ms_handle_reset con 0x5613ed314000 session 0x5613ebd4de00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fd09f000/0x0/0x4ffc00000, data 0x10be188/0x112e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 16900096 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 16900096 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.834548950s of 41.041542053s, submitted: 53
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 69 ms_handle_reset con 0x5613ed314400 session 0x5613eb995e00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 16883712 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fd099000/0x0/0x4ffc00000, data 0x10c0be2/0x1134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 16883712 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 16883712 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 615480 data_alloc: 218103808 data_used: 45056
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fc89a000/0x0/0x4ffc00000, data 0x18c0be2/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 16785408 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 24993792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fb09a000/0x0/0x4ffc00000, data 0x30c0be2/0x3134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 24993792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 70 ms_handle_reset con 0x5613ed315c00 session 0x5613ebe07e00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 24961024 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 24961024 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 71 ms_handle_reset con 0x5613ed315800 session 0x5613eb982b40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 629376 data_alloc: 218103808 data_used: 53248
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 24928256 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fd093000/0x0/0x4ffc00000, data 0x10c37aa/0x113a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 23871488 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 72 ms_handle_reset con 0x5613ece8c800 session 0x5613eb9823c0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 23797760 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.184603691s of 11.587786674s, submitted: 73
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 23781376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 73 ms_handle_reset con 0x5613ed314000 session 0x5613ecd212c0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 73 ms_handle_reset con 0x5613ed314400 session 0x5613ec2ad2c0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 23781376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 73 ms_handle_reset con 0x5613ed315c00 session 0x5613eb982000
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 636864 data_alloc: 218103808 data_used: 61440
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 23748608 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 74 heartbeat osd_stat(store_statfs(0x4fd084000/0x0/0x4ffc00000, data 0x10c80af/0x1149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 23658496 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 74 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613ed315400 session 0x5613eb982b40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613edb41400 session 0x5613ecd463c0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613ece8c800 session 0x5613ec2ad4a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613ed315c00 session 0x5613ec2e1e00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 23584768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 23560192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 76 ms_handle_reset con 0x5613ee378c00 session 0x5613eaf134a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 76 ms_handle_reset con 0x5613ee378000 session 0x5613ea3f1e00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 22372352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 77 ms_handle_reset con 0x5613ed315c00 session 0x5613eaf894a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 662651 data_alloc: 218103808 data_used: 69632
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 77 ms_handle_reset con 0x5613ece8c800 session 0x5613ec2ac1e0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 22249472 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fcc6a000/0x0/0x4ffc00000, data 0x10cbf1a/0x1150000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 22216704 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 77 ms_handle_reset con 0x5613edb41400 session 0x5613ea5e7680
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 22183936 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 22151168 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.674486160s of 10.168670654s, submitted: 142
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 79 ms_handle_reset con 0x5613ee379000 session 0x5613eaf89e00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 79 ms_handle_reset con 0x5613ee378c00 session 0x5613ea5e74a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 79 ms_handle_reset con 0x5613ece8c800 session 0x5613eb003e00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 22044672 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 672501 data_alloc: 218103808 data_used: 77824
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 22044672 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 22044672 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcc65000/0x0/0x4ffc00000, data 0x10cebaf/0x1158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 22118400 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 22118400 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 22011904 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 81 ms_handle_reset con 0x5613ed315c00 session 0x5613eac6d2c0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683009 data_alloc: 218103808 data_used: 77824
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 81 heartbeat osd_stat(store_statfs(0x4fcc5d000/0x0/0x4ffc00000, data 0x10d16d0/0x1160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 20881408 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 82 ms_handle_reset con 0x5613edb41400 session 0x5613eaf894a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fcc5d000/0x0/0x4ffc00000, data 0x10d16d0/0x1160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fbab8000/0x0/0x4ffc00000, data 0x10d2c9a/0x1164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 20807680 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 83 ms_handle_reset con 0x5613ee379000 session 0x5613ecd1ab40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 20742144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 84 ms_handle_reset con 0x5613ee379400 session 0x5613eb003860
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 20643840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.775205612s of 10.068033218s, submitted: 94
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 85 ms_handle_reset con 0x5613ece8c800 session 0x5613ea5e6960
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbaab000/0x0/0x4ffc00000, data 0x10d86e0/0x1171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 20619264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbaab000/0x0/0x4ffc00000, data 0x10d86e0/0x1171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613ed315c00 session 0x5613eb994960
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 710759 data_alloc: 218103808 data_used: 77824
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 20635648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613edb41400 session 0x5613ecd1ad20
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 20635648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613ee379000 session 0x5613ecd1a780
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 20635648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613edb63000 session 0x5613ecd1a000
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 86 heartbeat osd_stat(store_statfs(0x4fbaa6000/0x0/0x4ffc00000, data 0x10d9caa/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 20619264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 87 ms_handle_reset con 0x5613ece8c800 session 0x5613ecd1b4a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 20578304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 88 ms_handle_reset con 0x5613ed315c00 session 0x5613eada63c0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 716900 data_alloc: 218103808 data_used: 77824
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 88 ms_handle_reset con 0x5613edb41400 session 0x5613eb0032c0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 20430848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 89 ms_handle_reset con 0x5613edb63000 session 0x5613ec2e0b40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fba9d000/0x0/0x4ffc00000, data 0x10dd738/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 20406272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 90 ms_handle_reset con 0x5613edb63800 session 0x5613eb982b40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 20316160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 91 ms_handle_reset con 0x5613ece8c800 session 0x5613eac6cb40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 20250624 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.562761307s of 10.121302605s, submitted: 134
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 92 ms_handle_reset con 0x5613edb63000 session 0x5613eada6000
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 92 ms_handle_reset con 0x5613ed315c00 session 0x5613eac6cf00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 92 ms_handle_reset con 0x5613edb41400 session 0x5613ebe063c0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 19079168 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 726803 data_alloc: 218103808 data_used: 90112
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 93 ms_handle_reset con 0x5613edb63c00 session 0x5613ebd4cd20
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba9a000/0x0/0x4ffc00000, data 0x10e215e/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 19185664 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 93 ms_handle_reset con 0x5613ece8c800 session 0x5613ea5e7c20
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 94 ms_handle_reset con 0x5613ed315c00 session 0x5613ec2ade00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 19193856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 19193856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 95 ms_handle_reset con 0x5613edb41400 session 0x5613eb983860
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 19177472 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb63000 session 0x5613ea3f03c0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 738425 data_alloc: 218103808 data_used: 86016
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba8f000/0x0/0x4ffc00000, data 0x10e67de/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba8f000/0x0/0x4ffc00000, data 0x10e67de/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613eb862400 session 0x5613eaf12d20
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ece8c800 session 0x5613eaf12b40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ed315c00 session 0x5613eaf125a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb41400 session 0x5613ecd1a000
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb4c000 session 0x5613ecd1ad20
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb63000 session 0x5613ecd1a780
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ece8c800 session 0x5613eb994960
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ed315c00 session 0x5613eb994780
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb41400 session 0x5613eb002b40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb4c000 session 0x5613eb003860
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 18186240 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 18186240 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 738018 data_alloc: 218103808 data_used: 86016
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.961263657s of 11.402002335s, submitted: 155
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 97 ms_handle_reset con 0x5613ecba0000 session 0x5613eac6cf00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 17940480 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 17940480 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba69000/0x0/0x4ffc00000, data 0x110bcb6/0x11b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 17924096 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba69000/0x0/0x4ffc00000, data 0x110bcb6/0x11b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 17924096 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba69000/0x0/0x4ffc00000, data 0x110bcb6/0x11b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 97 ms_handle_reset con 0x5613edb41400 session 0x5613ec2ad4a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 17874944 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746531 data_alloc: 218103808 data_used: 102400
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 97 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 17825792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb4c000 session 0x5613eb994b40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb63c00 session 0x5613ea5e7680
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb62000 session 0x5613ecd1a000
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb62400 session 0x5613eb983c20
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 17620992 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 99 ms_handle_reset con 0x5613edb41400 session 0x5613ebd4d680
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 17596416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613edb63c00 session 0x5613eada3680
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ec2d8400 session 0x5613eada25a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ec2d9400 session 0x5613eada2f00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 17530880 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ee36e000 session 0x5613eada30e0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ee36e000 session 0x5613eaf89860
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fba5e000/0x0/0x4ffc00000, data 0x110fe54/0x11bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 17506304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757056 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 17506304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.367811203s of 10.547314644s, submitted: 63
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 101 ms_handle_reset con 0x5613ec2d8400 session 0x5613eb139860
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 17489920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 101 ms_handle_reset con 0x5613ece8c800 session 0x5613ec2e1e00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 101 ms_handle_reset con 0x5613ed315c00 session 0x5613ebd4de00
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 17539072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fba5e000/0x0/0x4ffc00000, data 0x111146e/0x11bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 17539072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 102 ms_handle_reset con 0x5613ec2d9400 session 0x5613eb994d20
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 17448960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763148 data_alloc: 218103808 data_used: 106496
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 17440768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 17440768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 17424384 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fba7b000/0x0/0x4ffc00000, data 0x10eff54/0x11a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 17424384 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 103 ms_handle_reset con 0x5613ee379000 session 0x5613eada74a0
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 103 ms_handle_reset con 0x5613edb63400 session 0x5613eb983a40
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 103 ms_handle_reset con 0x5613ec2d8400 session 0x5613eaf16000
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 764913 data_alloc: 218103808 data_used: 114688
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fba7a000/0x0/0x4ffc00000, data 0x10f13e5/0x11a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 104 ms_handle_reset con 0x5613ece8c800 session 0x5613eac0f860
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.541474342s of 10.000211716s, submitted: 137
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 105 ms_handle_reset con 0x5613ed315c00 session 0x5613eac0e780
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10f13c2/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fba79000/0x0/0x4ffc00000, data 0x10f298e/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765885 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fba79000/0x0/0x4ffc00000, data 0x10f298e/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765885 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.993864059s of 12.120968819s, submitted: 33
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 17195008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'config diff' '{prefix=config diff}'
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'config show' '{prefix=config show}'
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 16842752 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 16777216 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'log dump' '{prefix=log dump}'
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 27811840 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 27656192 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 27656192 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 27656192 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 27656192 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 27598848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 19:06:40 np0005535838 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
